Test Report: QEMU_macOS 19712

c4dd788a1c1ea09a0f3bb20836a8b75126e684b1:2024-09-27:36398

Failed tests (99/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 27.57
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.26
22 TestOffline 9.87
33 TestAddons/parallel/Registry 71.3
45 TestCertOptions 10.41
46 TestCertExpiration 195.32
47 TestDockerFlags 10.06
48 TestForceSystemdFlag 10.32
49 TestForceSystemdEnv 12.24
94 TestFunctional/parallel/ServiceCmdConnect 41.83
166 TestMultiControlPlane/serial/StopSecondaryNode 64.14
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 51.93
168 TestMultiControlPlane/serial/RestartSecondaryNode 82.99
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.39
171 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
173 TestMultiControlPlane/serial/StopCluster 202.07
174 TestMultiControlPlane/serial/RestartCluster 5.24
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
176 TestMultiControlPlane/serial/AddSecondaryNode 0.07
180 TestImageBuild/serial/Setup 10.05
183 TestJSONOutput/start/Command 10.01
189 TestJSONOutput/pause/Command 0.08
195 TestJSONOutput/unpause/Command 0.04
212 TestMinikubeProfile 10.3
215 TestMountStart/serial/StartWithMountFirst 10.16
218 TestMultiNode/serial/FreshStart2Nodes 10.04
219 TestMultiNode/serial/DeployApp2Nodes 114.61
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.07
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.08
224 TestMultiNode/serial/CopyFile 0.06
225 TestMultiNode/serial/StopNode 0.14
226 TestMultiNode/serial/StartAfterStop 45.67
227 TestMultiNode/serial/RestartKeepsNodes 8.54
228 TestMultiNode/serial/DeleteNode 0.1
229 TestMultiNode/serial/StopMultiNode 2.12
230 TestMultiNode/serial/RestartMultiNode 5.25
231 TestMultiNode/serial/ValidateNameConflict 20.46
235 TestPreload 10.04
237 TestScheduledStopUnix 10.08
238 TestSkaffold 13.32
241 TestRunningBinaryUpgrade 600.57
243 TestKubernetesUpgrade 17.26
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.78
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.21
259 TestStoppedBinaryUpgrade/Upgrade 563.31
261 TestPause/serial/Start 10.07
271 TestNoKubernetes/serial/StartWithK8s 9.94
272 TestNoKubernetes/serial/StartWithStopK8s 5.3
273 TestNoKubernetes/serial/Start 5.3
277 TestNoKubernetes/serial/StartNoArgs 5.35
279 TestNetworkPlugins/group/auto/Start 9.9
280 TestNetworkPlugins/group/kindnet/Start 9.83
281 TestNetworkPlugins/group/calico/Start 9.8
282 TestNetworkPlugins/group/custom-flannel/Start 9.88
283 TestNetworkPlugins/group/false/Start 9.74
284 TestNetworkPlugins/group/enable-default-cni/Start 9.86
285 TestNetworkPlugins/group/flannel/Start 9.8
286 TestNetworkPlugins/group/bridge/Start 9.96
288 TestNetworkPlugins/group/kubenet/Start 9.97
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.78
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
297 TestStartStop/group/no-preload/serial/FirstStart 10.2
298 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/old-k8s-version/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 9.98
304 TestStartStop/group/no-preload/serial/DeployApp 0.09
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
308 TestStartStop/group/no-preload/serial/SecondStart 5.28
309 TestStartStop/group/embed-certs/serial/DeployApp 0.09
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
313 TestStartStop/group/embed-certs/serial/SecondStart 5.27
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
317 TestStartStop/group/no-preload/serial/Pause 0.1
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.02
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
323 TestStartStop/group/embed-certs/serial/Pause 0.1
325 TestStartStop/group/newest-cni/serial/FirstStart 10.02
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.4
335 TestStartStop/group/newest-cni/serial/SecondStart 5.26
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (27.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-196000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-196000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (27.563904917s)

-- stdout --
	{"specversion":"1.0","id":"e9cf5a4c-c24b-4a32-a0f8-66a3157af399","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-196000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0a04491-9741-4e33-b998-aec432a4d1fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19712"}}
	{"specversion":"1.0","id":"388fe497-be42-4d7e-8d2f-4fe63adff0e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig"}}
	{"specversion":"1.0","id":"39a338c4-587a-485f-81b7-c44489437080","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d2460ba2-9bf7-4210-8b88-c19c1126e3e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"530ef004-532f-4472-80c8-fe62776c0194","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube"}}
	{"specversion":"1.0","id":"8d48defa-c8ca-4311-aa76-114309dce1ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"557caf03-2856-4f56-a278-8a5d371e43d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"17f7ec91-5880-491d-a7d1-68f130ba77c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"dc24fee5-4007-4999-973f-ea0fb8276bf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"95d71b06-e589-41b8-8769-a98f8ea10e30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-196000\" primary control-plane node in \"download-only-196000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ebe764f-9ec7-454d-87fd-db2c80b64932","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"447b730c-3cb6-417a-8ccd-186839cc9d40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0] Decompressors:map[bz2:0x1400015fd00 gz:0x1400015fd08 tar:0x1400015fc40 tar.bz2:0x1400015fc50 tar.gz:0x1400015fc60 tar.xz:0x1400015fc90 tar.zst:0x1400015fce0 tbz2:0x1400015fc50 tgz:0x14
00015fc60 txz:0x1400015fc90 tzst:0x1400015fce0 xz:0x1400015fd10 zip:0x1400015fd20 zst:0x1400015fd18] Getters:map[file:0x1400136e8a0 http:0x140000b8c30 https:0x140000b8dc0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"f0a424a1-09b9-416b-a8e0-fad7e863c632","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0927 09:54:58.791854    2040 out.go:345] Setting OutFile to fd 1 ...
	I0927 09:54:58.791998    2040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 09:54:58.792002    2040 out.go:358] Setting ErrFile to fd 2...
	I0927 09:54:58.792005    2040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 09:54:58.792126    2040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	W0927 09:54:58.792233    2040 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19712-1508/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19712-1508/.minikube/config/config.json: no such file or directory
	I0927 09:54:58.793499    2040 out.go:352] Setting JSON to true
	I0927 09:54:58.810578    2040 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1462,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 09:54:58.810641    2040 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 09:54:58.816428    2040 out.go:97] [download-only-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 09:54:58.816559    2040 notify.go:220] Checking for updates...
	W0927 09:54:58.816620    2040 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 09:54:58.819455    2040 out.go:169] MINIKUBE_LOCATION=19712
	I0927 09:54:58.826403    2040 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 09:54:58.831425    2040 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 09:54:58.835414    2040 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 09:54:58.837007    2040 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	W0927 09:54:58.843401    2040 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 09:54:58.843634    2040 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 09:54:58.848512    2040 out.go:97] Using the qemu2 driver based on user configuration
	I0927 09:54:58.848536    2040 start.go:297] selected driver: qemu2
	I0927 09:54:58.848553    2040 start.go:901] validating driver "qemu2" against <nil>
	I0927 09:54:58.848638    2040 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 09:54:58.852412    2040 out.go:169] Automatically selected the socket_vmnet network
	I0927 09:54:58.858195    2040 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0927 09:54:58.858323    2040 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 09:54:58.858369    2040 cni.go:84] Creating CNI manager for ""
	I0927 09:54:58.858420    2040 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0927 09:54:58.858473    2040 start.go:340] cluster config:
	{Name:download-only-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 09:54:58.864044    2040 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 09:54:58.867431    2040 out.go:97] Downloading VM boot image ...
	I0927 09:54:58.867457    2040 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0927 09:55:14.187682    2040 out.go:97] Starting "download-only-196000" primary control-plane node in "download-only-196000" cluster
	I0927 09:55:14.187711    2040 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 09:55:14.250400    2040 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0927 09:55:14.250423    2040 cache.go:56] Caching tarball of preloaded images
	I0927 09:55:14.250604    2040 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 09:55:14.253765    2040 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0927 09:55:14.253771    2040 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0927 09:55:14.343117    2040 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0927 09:55:25.021652    2040 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0927 09:55:25.021849    2040 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0927 09:55:25.717868    2040 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0927 09:55:25.718068    2040 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/download-only-196000/config.json ...
	I0927 09:55:25.718085    2040 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/download-only-196000/config.json: {Name:mk1b8dc3dd5838cefe8bb7629d424dc90e128c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:55:25.718361    2040 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 09:55:25.718555    2040 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0927 09:55:26.285258    2040 out.go:193] 
	W0927 09:55:26.291303    2040 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0] Decompressors:map[bz2:0x1400015fd00 gz:0x1400015fd08 tar:0x1400015fc40 tar.bz2:0x1400015fc50 tar.gz:0x1400015fc60 tar.xz:0x1400015fc90 tar.zst:0x1400015fce0 tbz2:0x1400015fc50 tgz:0x1400015fc60 txz:0x1400015fc90 tzst:0x1400015fce0 xz:0x1400015fd10 zip:0x1400015fd20 zst:0x1400015fd18] Getters:map[file:0x1400136e8a0 http:0x140000b8c30 https:0x140000b8dc0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0927 09:55:26.291329    2040 out_reason.go:110] 
	W0927 09:55:26.297185    2040 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 09:55:26.301170    2040 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-196000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (27.57s)
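
The root cause is the 404 recorded above: dl.k8s.io does not serve the kubectl checksum file for v1.20.0 on darwin/arm64, so the cache step exits with status 40 before any cluster is created. A quick probe (a sketch, assuming curl is available on the host):

	# HEAD request against the exact checksum URL from the error message;
	# a 404 here reproduces the INET_CACHE_KUBECTL failure.
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1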

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
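
This subtest only verifies that the previous download left a kubectl binary in the cache, so it fails as a direct consequence of the json-events failure above. A minimal check against the cache path from the stat error (sketch):

	# The binary was never cached because the download exited with status 40,
	# so this reports "No such file or directory".
	ls -l /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.20.0/kubectl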

TestBinaryMirror (0.26s)

=== RUN   TestBinaryMirror
I0927 09:55:35.601207    2039 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-750000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-750000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 : exit status 40 (156.369291ms)

-- stdout --
	* [binary-mirror-750000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-750000" primary control-plane node in "binary-mirror-750000" cluster
	
	

-- /stdout --
** stderr ** 
	I0927 09:55:35.660714    2113 out.go:345] Setting OutFile to fd 1 ...
	I0927 09:55:35.660861    2113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 09:55:35.660865    2113 out.go:358] Setting ErrFile to fd 2...
	I0927 09:55:35.660867    2113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 09:55:35.661000    2113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 09:55:35.662137    2113 out.go:352] Setting JSON to false
	I0927 09:55:35.678343    2113 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1499,"bootTime":1727454636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 09:55:35.678444    2113 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 09:55:35.682933    2113 out.go:177] * [binary-mirror-750000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 09:55:35.689936    2113 notify.go:220] Checking for updates...
	I0927 09:55:35.693861    2113 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 09:55:35.696872    2113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 09:55:35.698378    2113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 09:55:35.702875    2113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 09:55:35.706837    2113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 09:55:35.710041    2113 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 09:55:35.713873    2113 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 09:55:35.719858    2113 start.go:297] selected driver: qemu2
	I0927 09:55:35.719865    2113 start.go:901] validating driver "qemu2" against <nil>
	I0927 09:55:35.719920    2113 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 09:55:35.723868    2113 out.go:177] * Automatically selected the socket_vmnet network
	I0927 09:55:35.730132    2113 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0927 09:55:35.730219    2113 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 09:55:35.730241    2113 cni.go:84] Creating CNI manager for ""
	I0927 09:55:35.730270    2113 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 09:55:35.730276    2113 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 09:55:35.730322    2113 start.go:340] cluster config:
	{Name:binary-mirror-750000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-750000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49312 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 09:55:35.733932    2113 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 09:55:35.741858    2113 out.go:177] * Starting "binary-mirror-750000" primary control-plane node in "binary-mirror-750000" cluster
	I0927 09:55:35.745807    2113 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 09:55:35.745820    2113 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 09:55:35.745831    2113 cache.go:56] Caching tarball of preloaded images
	I0927 09:55:35.745900    2113 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 09:55:35.745905    2113 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 09:55:35.746124    2113 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/binary-mirror-750000/config.json ...
	I0927 09:55:35.746136    2113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/binary-mirror-750000/config.json: {Name:mka874b5eb2dcb8eca3843d2a8abfa732f6cf924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:55:35.746485    2113 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 09:55:35.746537    2113 download.go:107] Downloading: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0927 09:55:35.765939    2113 out.go:201] 
	W0927 09:55:35.769927    2113 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1067d96c0 0x1067d96c0 0x1067d96c0 0x1067d96c0 0x1067d96c0 0x1067d96c0 0x1067d96c0] Decompressors:map[bz2:0x1400012c1f0 gz:0x1400012c1f8 tar:0x1400012c180 tar.bz2:0x1400012c190 tar.gz:0x1400012c1a0 tar.xz:0x1400012c1b0 tar.zst:0x1400012c1c0 tbz2:0x1400012c190 tgz:0x1400012c1a0 txz:0x1400012c1b0 tzst:0x1400012c1c0 xz:0x1400012c200 zip:0x1400012c210 zst:0x1400012c208] Getters:map[file:0x14000201410 http:0x14000672e60 https:0x14000672eb0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1067d96c0 0x1067d96c0 0x1067d96c0 0x1067d96c0 0x1067d96c0 0x1067d96c0 0x1067d96c0] Decompressors:map[bz2:0x1400012c1f0 gz:0x1400012c1f8 tar:0x1400012c180 tar.bz2:0x1400012c190 tar.gz:0x1400012c1a0 tar.xz:0x1400012c1b0 tar.zst:0x1400012c1c0 tbz2:0x1400012c190 tgz:0x1400012c1a0 txz:0x1400012c1b0 tzst:0x1400012c1c0 xz:0x1400012c200 zip:0x1400012c210 zst:0x1400012c208] Getters:map[file:0x14000201410 http:0x14000672e60 https:0x14000672eb0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0927 09:55:35.769933    2113 out.go:270] * 
	* 
	W0927 09:55:35.770417    2113 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 09:55:35.783839    2113 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-750000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49312" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-750000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-750000
--- FAIL: TestBinaryMirror (0.26s)
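
Here the kubectl fetch from the local mirror died with "unexpected EOF", i.e. the connection to the test's short-lived HTTP server dropped mid-transfer (port 49312 is an ephemeral listener and differs between runs). A sketch for probing a binary mirror by hand, assuming one is still listening; minikube requests both the binary and its .sha256 checksum:

	MIRROR=http://127.0.0.1:49312   # hypothetical; substitute your mirror's host:port
	curl -fsSL -o /dev/null "$MIRROR/v1.31.1/bin/darwin/arm64/kubectl" \
	  && curl -fsSL "$MIRROR/v1.31.1/bin/darwin/arm64/kubectl.sha256"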

TestOffline (9.87s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-614000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-614000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.716811334s)

-- stdout --
	* [offline-docker-614000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-614000" primary control-plane node in "offline-docker-614000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-614000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:34:46.083821    4705 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:34:46.083956    4705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:34:46.083959    4705 out.go:358] Setting ErrFile to fd 2...
	I0927 10:34:46.083962    4705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:34:46.084091    4705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:34:46.085307    4705 out.go:352] Setting JSON to false
	I0927 10:34:46.102838    4705 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3850,"bootTime":1727454636,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:34:46.102910    4705 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:34:46.108633    4705 out.go:177] * [offline-docker-614000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:34:46.116621    4705 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:34:46.116654    4705 notify.go:220] Checking for updates...
	I0927 10:34:46.123555    4705 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:34:46.126572    4705 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:34:46.129573    4705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:34:46.132579    4705 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:34:46.135704    4705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:34:46.138851    4705 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:34:46.138907    4705 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:34:46.142532    4705 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:34:46.149464    4705 start.go:297] selected driver: qemu2
	I0927 10:34:46.149473    4705 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:34:46.149484    4705 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:34:46.151477    4705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:34:46.154497    4705 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:34:46.157670    4705 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:34:46.157685    4705 cni.go:84] Creating CNI manager for ""
	I0927 10:34:46.157711    4705 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:34:46.157715    4705 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:34:46.157745    4705 start.go:340] cluster config:
	{Name:offline-docker-614000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-614000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:34:46.161309    4705 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:46.168598    4705 out.go:177] * Starting "offline-docker-614000" primary control-plane node in "offline-docker-614000" cluster
	I0927 10:34:46.172422    4705 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:34:46.172453    4705 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:34:46.172463    4705 cache.go:56] Caching tarball of preloaded images
	I0927 10:34:46.172554    4705 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:34:46.172559    4705 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:34:46.172621    4705 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/offline-docker-614000/config.json ...
	I0927 10:34:46.172631    4705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/offline-docker-614000/config.json: {Name:mk020c66d884db15ffb85747425ff46c38195446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:34:46.172865    4705 start.go:360] acquireMachinesLock for offline-docker-614000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:34:46.172899    4705 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "offline-docker-614000"
	I0927 10:34:46.172912    4705 start.go:93] Provisioning new machine with config: &{Name:offline-docker-614000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-614000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:34:46.172945    4705 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:34:46.180375    4705 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0927 10:34:46.196374    4705 start.go:159] libmachine.API.Create for "offline-docker-614000" (driver="qemu2")
	I0927 10:34:46.196416    4705 client.go:168] LocalClient.Create starting
	I0927 10:34:46.196487    4705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:34:46.196517    4705 main.go:141] libmachine: Decoding PEM data...
	I0927 10:34:46.196525    4705 main.go:141] libmachine: Parsing certificate...
	I0927 10:34:46.196570    4705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:34:46.196595    4705 main.go:141] libmachine: Decoding PEM data...
	I0927 10:34:46.196606    4705 main.go:141] libmachine: Parsing certificate...
	I0927 10:34:46.196982    4705 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:34:46.356956    4705 main.go:141] libmachine: Creating SSH key...
	I0927 10:34:46.392072    4705 main.go:141] libmachine: Creating Disk image...
	I0927 10:34:46.392078    4705 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:34:46.392262    4705 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2
	I0927 10:34:46.407765    4705 main.go:141] libmachine: STDOUT: 
	I0927 10:34:46.407791    4705 main.go:141] libmachine: STDERR: 
	I0927 10:34:46.407867    4705 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2 +20000M
	I0927 10:34:46.416410    4705 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:34:46.416428    4705 main.go:141] libmachine: STDERR: 
	I0927 10:34:46.416454    4705 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2
	I0927 10:34:46.416459    4705 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:34:46.416469    4705 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:34:46.416496    4705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:88:23:98:82:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2
	I0927 10:34:46.418527    4705 main.go:141] libmachine: STDOUT: 
	I0927 10:34:46.418557    4705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:34:46.418583    4705 client.go:171] duration metric: took 222.167125ms to LocalClient.Create
	I0927 10:34:48.420661    4705 start.go:128] duration metric: took 2.247765667s to createHost
	I0927 10:34:48.420680    4705 start.go:83] releasing machines lock for "offline-docker-614000", held for 2.247835084s
	W0927 10:34:48.420691    4705 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:34:48.430242    4705 out.go:177] * Deleting "offline-docker-614000" in qemu2 ...
	W0927 10:34:48.443563    4705 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:34:48.443574    4705 start.go:729] Will try again in 5 seconds ...
	I0927 10:34:53.445552    4705 start.go:360] acquireMachinesLock for offline-docker-614000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:34:53.445654    4705 start.go:364] duration metric: took 76.5µs to acquireMachinesLock for "offline-docker-614000"
	I0927 10:34:53.445689    4705 start.go:93] Provisioning new machine with config: &{Name:offline-docker-614000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-614000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:34:53.445742    4705 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:34:53.453940    4705 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0927 10:34:53.469477    4705 start.go:159] libmachine.API.Create for "offline-docker-614000" (driver="qemu2")
	I0927 10:34:53.469506    4705 client.go:168] LocalClient.Create starting
	I0927 10:34:53.469571    4705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:34:53.469607    4705 main.go:141] libmachine: Decoding PEM data...
	I0927 10:34:53.469617    4705 main.go:141] libmachine: Parsing certificate...
	I0927 10:34:53.469656    4705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:34:53.469681    4705 main.go:141] libmachine: Decoding PEM data...
	I0927 10:34:53.469687    4705 main.go:141] libmachine: Parsing certificate...
	I0927 10:34:53.469989    4705 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:34:53.625862    4705 main.go:141] libmachine: Creating SSH key...
	I0927 10:34:53.682860    4705 main.go:141] libmachine: Creating Disk image...
	I0927 10:34:53.682872    4705 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:34:53.683087    4705 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2
	I0927 10:34:53.692453    4705 main.go:141] libmachine: STDOUT: 
	I0927 10:34:53.692477    4705 main.go:141] libmachine: STDERR: 
	I0927 10:34:53.692536    4705 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2 +20000M
	I0927 10:34:53.700618    4705 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:34:53.700634    4705 main.go:141] libmachine: STDERR: 
	I0927 10:34:53.700645    4705 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2
	I0927 10:34:53.700651    4705 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:34:53.700659    4705 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:34:53.700684    4705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:e4:f7:8f:cc:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/offline-docker-614000/disk.qcow2
	I0927 10:34:53.702289    4705 main.go:141] libmachine: STDOUT: 
	I0927 10:34:53.702301    4705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:34:53.702314    4705 client.go:171] duration metric: took 232.811583ms to LocalClient.Create
	I0927 10:34:55.704477    4705 start.go:128] duration metric: took 2.25876375s to createHost
	I0927 10:34:55.704541    4705 start.go:83] releasing machines lock for "offline-docker-614000", held for 2.258933917s
	W0927 10:34:55.704900    4705 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-614000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:34:55.723654    4705 out.go:201] 
	W0927 10:34:55.737601    4705 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:34:55.737647    4705 out.go:270] * 
	W0927 10:34:55.740106    4705 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:34:55.757595    4705 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-614000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-27 10:34:55.77207 -0700 PDT m=+2397.147266543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-614000 -n offline-docker-614000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-614000 -n offline-docker-614000: exit status 7 (68.653333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-614000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-614000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-614000
--- FAIL: TestOffline (9.87s)
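The root cause above is socket_vmnet being unreachable ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"): qemu2 networking on this runner depends on a socket_vmnet daemon listening on that socket, and the same error recurs across most of the qemu2 start failures in this report. A minimal manual spot-check, sketched under the assumption that socket_vmnet is installed at the paths the log shows (/opt/socket_vmnet, /var/run/socket_vmnet); the start command mirrors upstream socket_vmnet usage and is not taken from this run:

	# the socket should exist and a daemon should be holding it open
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if no daemon is running, start one by hand (vmnet requires root); the
	# gateway matches the 192.168.105.x range seen elsewhere in this report
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet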

TestAddons/parallel/Registry (71.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.460625ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7cz7s" [244f365b-caba-42ac-9269-727d7fcfef8d] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004373375s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tn6h5" [2314f77d-9c59-460e-a0dc-812866fd625b] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004432167s
addons_test.go:338: (dbg) Run:  kubectl --context addons-289000 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-289000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-289000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.061785709s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-289000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 ip
2024/09/27 10:08:44 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 addons disable registry --alsologtostderr -v=1
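The in-cluster wget timed out, after which the test fell back to probing the registry directly on the node IP at port 5000 (the DEBUG GET above). A manual equivalent, sketched on the assumption that the registry addon exposes the standard Docker registry v2 HTTP API on that port:

	IP=$(out/minikube-darwin-arm64 -p addons-289000 ip)
	# a reachable registry answers HTTP 200 with a JSON list of repositories
	curl -sS -i "http://$IP:5000/v2/_catalog"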
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-289000 -n addons-289000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-196000 | jenkins | v1.34.0 | 27 Sep 24 09:54 PDT |                     |
	|         | -p download-only-196000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT | 27 Sep 24 09:55 PDT |
	| delete  | -p download-only-196000                                                                     | download-only-196000 | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT | 27 Sep 24 09:55 PDT |
	| start   | -o=json --download-only                                                                     | download-only-992000 | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT |                     |
	|         | -p download-only-992000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT | 27 Sep 24 09:55 PDT |
	| delete  | -p download-only-992000                                                                     | download-only-992000 | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT | 27 Sep 24 09:55 PDT |
	| delete  | -p download-only-196000                                                                     | download-only-196000 | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT | 27 Sep 24 09:55 PDT |
	| delete  | -p download-only-992000                                                                     | download-only-992000 | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT | 27 Sep 24 09:55 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-750000 | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT |                     |
	|         | binary-mirror-750000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-750000                                                                     | binary-mirror-750000 | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT | 27 Sep 24 09:55 PDT |
	| addons  | enable dashboard -p                                                                         | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT |                     |
	|         | addons-289000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT |                     |
	|         | addons-289000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-289000 --wait=true                                                                | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT | 27 Sep 24 09:58 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-289000 addons disable                                                                | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 09:59 PDT | 27 Sep 24 09:59 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 10:07 PDT | 27 Sep 24 10:07 PDT |
	|         | -p addons-289000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-289000 addons disable                                                                | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 10:07 PDT | 27 Sep 24 10:07 PDT |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-289000 addons disable                                                                | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 10:07 PDT | 27 Sep 24 10:08 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 10:08 PDT | 27 Sep 24 10:08 PDT |
	|         | -p addons-289000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-289000 ssh cat                                                                       | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 10:08 PDT | 27 Sep 24 10:08 PDT |
	|         | /opt/local-path-provisioner/pvc-f084ad39-b9a4-43f9-bfcc-54549c24f9b6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-289000 addons disable                                                                | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 10:08 PDT |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-289000 ip                                                                            | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 10:08 PDT | 27 Sep 24 10:08 PDT |
	| addons  | addons-289000 addons disable                                                                | addons-289000        | jenkins | v1.34.0 | 27 Sep 24 10:08 PDT | 27 Sep 24 10:08 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 09:55:35
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 09:55:35.946719    2127 out.go:345] Setting OutFile to fd 1 ...
	I0927 09:55:35.946854    2127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 09:55:35.946857    2127 out.go:358] Setting ErrFile to fd 2...
	I0927 09:55:35.946860    2127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 09:55:35.946996    2127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 09:55:35.948087    2127 out.go:352] Setting JSON to false
	I0927 09:55:35.964241    2127 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1499,"bootTime":1727454636,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 09:55:35.964309    2127 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 09:55:35.968949    2127 out.go:177] * [addons-289000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 09:55:35.975904    2127 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 09:55:35.975958    2127 notify.go:220] Checking for updates...
	I0927 09:55:35.982851    2127 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 09:55:35.985865    2127 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 09:55:35.988883    2127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 09:55:35.991828    2127 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 09:55:35.994855    2127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 09:55:35.998153    2127 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 09:55:36.002831    2127 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 09:55:36.009914    2127 start.go:297] selected driver: qemu2
	I0927 09:55:36.009921    2127 start.go:901] validating driver "qemu2" against <nil>
	I0927 09:55:36.009928    2127 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 09:55:36.012195    2127 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 09:55:36.015858    2127 out.go:177] * Automatically selected the socket_vmnet network
	I0927 09:55:36.018936    2127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 09:55:36.018967    2127 cni.go:84] Creating CNI manager for ""
	I0927 09:55:36.018992    2127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 09:55:36.018996    2127 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 09:55:36.019023    2127 start.go:340] cluster config:
	{Name:addons-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 09:55:36.022733    2127 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 09:55:36.031851    2127 out.go:177] * Starting "addons-289000" primary control-plane node in "addons-289000" cluster
	I0927 09:55:36.035927    2127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 09:55:36.035940    2127 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 09:55:36.035949    2127 cache.go:56] Caching tarball of preloaded images
	I0927 09:55:36.036012    2127 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 09:55:36.036018    2127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 09:55:36.036263    2127 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/config.json ...
	I0927 09:55:36.036279    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/config.json: {Name:mk922f6e76ebef651848f193fa67976f922d4f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:55:36.036680    2127 start.go:360] acquireMachinesLock for addons-289000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 09:55:36.036744    2127 start.go:364] duration metric: took 58.292µs to acquireMachinesLock for "addons-289000"
	I0927 09:55:36.036755    2127 start.go:93] Provisioning new machine with config: &{Name:addons-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 09:55:36.036793    2127 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 09:55:36.044848    2127 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0927 09:55:36.277619    2127 start.go:159] libmachine.API.Create for "addons-289000" (driver="qemu2")
	I0927 09:55:36.277662    2127 client.go:168] LocalClient.Create starting
	I0927 09:55:36.277856    2127 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 09:55:36.374769    2127 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 09:55:36.443037    2127 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 09:55:36.643534    2127 main.go:141] libmachine: Creating SSH key...
	I0927 09:55:36.846730    2127 main.go:141] libmachine: Creating Disk image...
	I0927 09:55:36.846740    2127 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 09:55:36.847015    2127 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/disk.qcow2
	I0927 09:55:36.866682    2127 main.go:141] libmachine: STDOUT: 
	I0927 09:55:36.866710    2127 main.go:141] libmachine: STDERR: 
	I0927 09:55:36.866795    2127 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/disk.qcow2 +20000M
	I0927 09:55:36.874943    2127 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 09:55:36.874957    2127 main.go:141] libmachine: STDERR: 
	I0927 09:55:36.874973    2127 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/disk.qcow2
	I0927 09:55:36.874980    2127 main.go:141] libmachine: Starting QEMU VM...
	I0927 09:55:36.875019    2127 qemu.go:418] Using hvf for hardware acceleration
	I0927 09:55:36.875049    2127 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:84:e7:d8:22:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/disk.qcow2
	I0927 09:55:36.933759    2127 main.go:141] libmachine: STDOUT: 
	I0927 09:55:36.933799    2127 main.go:141] libmachine: STDERR: 
	I0927 09:55:36.933804    2127 main.go:141] libmachine: Attempt 0
	I0927 09:55:36.933815    2127 main.go:141] libmachine: Searching for 56:84:e7:d8:22:8b in /var/db/dhcpd_leases ...
	I0927 09:55:36.933877    2127 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0927 09:55:36.933901    2127 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f834d4}
	I0927 09:55:38.936065    2127 main.go:141] libmachine: Attempt 1
	I0927 09:55:38.936187    2127 main.go:141] libmachine: Searching for 56:84:e7:d8:22:8b in /var/db/dhcpd_leases ...
	I0927 09:55:38.936471    2127 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0927 09:55:38.936525    2127 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f834d4}
	I0927 09:55:40.938715    2127 main.go:141] libmachine: Attempt 2
	I0927 09:55:40.938793    2127 main.go:141] libmachine: Searching for 56:84:e7:d8:22:8b in /var/db/dhcpd_leases ...
	I0927 09:55:40.939121    2127 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0927 09:55:40.939172    2127 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f834d4}
	I0927 09:55:42.941324    2127 main.go:141] libmachine: Attempt 3
	I0927 09:55:42.941360    2127 main.go:141] libmachine: Searching for 56:84:e7:d8:22:8b in /var/db/dhcpd_leases ...
	I0927 09:55:42.941431    2127 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0927 09:55:42.941455    2127 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f834d4}
	I0927 09:55:44.943526    2127 main.go:141] libmachine: Attempt 4
	I0927 09:55:44.943560    2127 main.go:141] libmachine: Searching for 56:84:e7:d8:22:8b in /var/db/dhcpd_leases ...
	I0927 09:55:44.943645    2127 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0927 09:55:44.943659    2127 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f834d4}
	I0927 09:55:46.945694    2127 main.go:141] libmachine: Attempt 5
	I0927 09:55:46.945712    2127 main.go:141] libmachine: Searching for 56:84:e7:d8:22:8b in /var/db/dhcpd_leases ...
	I0927 09:55:46.945748    2127 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0927 09:55:46.945756    2127 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f834d4}
	I0927 09:55:48.947775    2127 main.go:141] libmachine: Attempt 6
	I0927 09:55:48.947797    2127 main.go:141] libmachine: Searching for 56:84:e7:d8:22:8b in /var/db/dhcpd_leases ...
	I0927 09:55:48.947892    2127 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0927 09:55:48.947902    2127 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f834d4}
	I0927 09:55:50.948612    2127 main.go:141] libmachine: Attempt 7
	I0927 09:55:50.948767    2127 main.go:141] libmachine: Searching for 56:84:e7:d8:22:8b in /var/db/dhcpd_leases ...
	I0927 09:55:50.948922    2127 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0927 09:55:50.948934    2127 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:84:e7:d8:22:8b ID:1,56:84:e7:d8:22:8b Lease:0x66f83515}
	I0927 09:55:50.948940    2127 main.go:141] libmachine: Found match: 56:84:e7:d8:22:8b
	I0927 09:55:50.948947    2127 main.go:141] libmachine: IP: 192.168.105.2
	I0927 09:55:50.948962    2127 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0927 09:55:52.969022    2127 machine.go:93] provisionDockerMachine start ...
	I0927 09:55:52.970496    2127 main.go:141] libmachine: Using SSH client type: native
	I0927 09:55:52.970929    2127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f9c00] 0x1010fc440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0927 09:55:52.970946    2127 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 09:55:53.040036    2127 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 09:55:53.040067    2127 buildroot.go:166] provisioning hostname "addons-289000"
	I0927 09:55:53.040222    2127 main.go:141] libmachine: Using SSH client type: native
	I0927 09:55:53.040467    2127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f9c00] 0x1010fc440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0927 09:55:53.040477    2127 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-289000 && echo "addons-289000" | sudo tee /etc/hostname
	I0927 09:55:53.102426    2127 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-289000
	
	I0927 09:55:53.102533    2127 main.go:141] libmachine: Using SSH client type: native
	I0927 09:55:53.102689    2127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f9c00] 0x1010fc440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0927 09:55:53.102705    2127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-289000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-289000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-289000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 09:55:53.155288    2127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 09:55:53.155301    2127 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19712-1508/.minikube CaCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19712-1508/.minikube}
	I0927 09:55:53.155309    2127 buildroot.go:174] setting up certificates
	I0927 09:55:53.155321    2127 provision.go:84] configureAuth start
	I0927 09:55:53.155326    2127 provision.go:143] copyHostCerts
	I0927 09:55:53.155410    2127 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem (1078 bytes)
	I0927 09:55:53.155676    2127 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem (1123 bytes)
	I0927 09:55:53.155795    2127 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem (1679 bytes)
	I0927 09:55:53.155893    2127 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem org=jenkins.addons-289000 san=[127.0.0.1 192.168.105.2 addons-289000 localhost minikube]
	I0927 09:55:53.567887    2127 provision.go:177] copyRemoteCerts
	I0927 09:55:53.567965    2127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 09:55:53.567985    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:55:53.593735    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 09:55:53.602177    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 09:55:53.610518    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 09:55:53.618669    2127 provision.go:87] duration metric: took 463.337291ms to configureAuth
	I0927 09:55:53.618678    2127 buildroot.go:189] setting minikube options for container-runtime
	I0927 09:55:53.618786    2127 config.go:182] Loaded profile config "addons-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 09:55:53.618839    2127 main.go:141] libmachine: Using SSH client type: native
	I0927 09:55:53.618934    2127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f9c00] 0x1010fc440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0927 09:55:53.618939    2127 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0927 09:55:53.666013    2127 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0927 09:55:53.666021    2127 buildroot.go:70] root file system type: tmpfs
	I0927 09:55:53.666072    2127 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0927 09:55:53.666114    2127 main.go:141] libmachine: Using SSH client type: native
	I0927 09:55:53.666221    2127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f9c00] 0x1010fc440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0927 09:55:53.666254    2127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0927 09:55:53.715242    2127 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0927 09:55:53.715306    2127 main.go:141] libmachine: Using SSH client type: native
	I0927 09:55:53.715411    2127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f9c00] 0x1010fc440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0927 09:55:53.715419    2127 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0927 09:55:55.070665    2127 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0927 09:55:55.070679    2127 machine.go:96] duration metric: took 2.101658334s to provisionDockerMachine
	I0927 09:55:55.070685    2127 client.go:171] duration metric: took 18.793328458s to LocalClient.Create
	I0927 09:55:55.070696    2127 start.go:167] duration metric: took 18.793392333s to libmachine.API.Create "addons-289000"
	I0927 09:55:55.070701    2127 start.go:293] postStartSetup for "addons-289000" (driver="qemu2")
	I0927 09:55:55.070707    2127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 09:55:55.070792    2127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 09:55:55.070802    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:55:55.095935    2127 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 09:55:55.097525    2127 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 09:55:55.097537    2127 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19712-1508/.minikube/addons for local assets ...
	I0927 09:55:55.097641    2127 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19712-1508/.minikube/files for local assets ...
	I0927 09:55:55.097676    2127 start.go:296] duration metric: took 26.972958ms for postStartSetup
	I0927 09:55:55.098076    2127 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/config.json ...
	I0927 09:55:55.098258    2127 start.go:128] duration metric: took 19.061775584s to createHost
	I0927 09:55:55.098292    2127 main.go:141] libmachine: Using SSH client type: native
	I0927 09:55:55.098382    2127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010f9c00] 0x1010fc440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0927 09:55:55.098387    2127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 09:55:55.142376    2127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727456155.277411961
	
	I0927 09:55:55.142385    2127 fix.go:216] guest clock: 1727456155.277411961
	I0927 09:55:55.142393    2127 fix.go:229] Guest: 2024-09-27 09:55:55.277411961 -0700 PDT Remote: 2024-09-27 09:55:55.098261 -0700 PDT m=+19.170566917 (delta=179.150961ms)
	I0927 09:55:55.142404    2127 fix.go:200] guest clock delta is within tolerance: 179.150961ms
	I0927 09:55:55.142407    2127 start.go:83] releasing machines lock for "addons-289000", held for 19.105973375s
	I0927 09:55:55.142732    2127 ssh_runner.go:195] Run: cat /version.json
	I0927 09:55:55.142737    2127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 09:55:55.142741    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:55:55.142756    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:55:55.165874    2127 ssh_runner.go:195] Run: systemctl --version
	I0927 09:55:55.168278    2127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 09:55:55.212730    2127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 09:55:55.212784    2127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 09:55:55.219146    2127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 09:55:55.219156    2127 start.go:495] detecting cgroup driver to use...
	I0927 09:55:55.219294    2127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 09:55:55.225757    2127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0927 09:55:55.229557    2127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 09:55:55.233449    2127 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 09:55:55.233475    2127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 09:55:55.237476    2127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 09:55:55.241348    2127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 09:55:55.245366    2127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 09:55:55.249394    2127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 09:55:55.253357    2127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 09:55:55.257275    2127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 09:55:55.261248    2127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
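	The chain of sed edits above rewrites /etc/containerd/config.toml in place instead of templating a fresh file. A minimal sketch of the stanzas those commands leave behind, reconstructed from the seds themselves (the real file carries many more settings):

	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.10"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false    # "cgroupfs" driver, matching the kubelet config later in this run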
	I0927 09:55:55.265295    2127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 09:55:55.269233    2127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 09:55:55.269259    2127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 09:55:55.276004    2127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
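	The sysctl probe exits 255 because br_netfilter is not loaded yet, which is expected on first boot; minikube then loads the module and turns on IPv4 forwarding. The same sequence can be replayed by hand over SSH:

	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded (typically 1)
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward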
	I0927 09:55:55.279600    2127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 09:55:55.350292    2127 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0927 09:55:55.360894    2127 start.go:495] detecting cgroup driver to use...
	I0927 09:55:55.360993    2127 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0927 09:55:55.367538    2127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 09:55:55.377660    2127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 09:55:55.385440    2127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 09:55:55.390946    2127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 09:55:55.396009    2127 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0927 09:55:55.433352    2127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 09:55:55.439512    2127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 09:55:55.445845    2127 ssh_runner.go:195] Run: which cri-dockerd
	I0927 09:55:55.447275    2127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0927 09:55:55.450245    2127 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0927 09:55:55.456042    2127 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0927 09:55:55.534560    2127 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0927 09:55:55.605342    2127 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0927 09:55:55.605401    2127 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
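	The 130-byte daemon.json pushed here is what flips dockerd to the cgroupfs driver. A sketch of the shape such a file takes; apart from the exec-opts line, the other fields are assumptions, not a dump of the actual payload:

	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"],
	      "log-driver": "json-file",
	      "log-opts": { "max-size": "100m" },
	      "storage-driver": "overlay2"
	    }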
	I0927 09:55:55.611547    2127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 09:55:55.689936    2127 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 09:55:57.882414    2127 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.192496959s)
	I0927 09:55:57.882490    2127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0927 09:55:57.888033    2127 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0927 09:55:57.894626    2127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 09:55:57.899962    2127 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0927 09:55:57.964953    2127 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0927 09:55:58.054764    2127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 09:55:58.134400    2127 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0927 09:55:58.140869    2127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 09:55:58.146431    2127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 09:55:58.219455    2127 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0927 09:55:58.243778    2127 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0927 09:55:58.243882    2127 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0927 09:55:58.246057    2127 start.go:563] Will wait 60s for crictl version
	I0927 09:55:58.246096    2127 ssh_runner.go:195] Run: which crictl
	I0927 09:55:58.247565    2127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 09:55:58.270400    2127 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0927 09:55:58.270483    2127 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 09:55:58.284491    2127 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
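	Both version probes can be reproduced inside the guest to confirm the CRI shim and the docker daemon agree:

	    sudo crictl version                             # RuntimeName: docker, RuntimeApiVersion: v1
	    docker version --format '{{.Server.Version}}'   # 27.3.1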
	I0927 09:55:58.305509    2127 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0927 09:55:58.305679    2127 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0927 09:55:58.307263    2127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 09:55:58.311541    2127 kubeadm.go:883] updating cluster {Name:addons-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 09:55:58.311589    2127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 09:55:58.311641    2127 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 09:55:58.316743    2127 docker.go:685] Got preloaded images: 
	I0927 09:55:58.316753    2127 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0927 09:55:58.316792    2127 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0927 09:55:58.320845    2127 ssh_runner.go:195] Run: which lz4
	I0927 09:55:58.322297    2127 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 09:55:58.323802    2127 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 09:55:58.323814    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0927 09:55:59.569481    2127 docker.go:649] duration metric: took 1.247254875s to copy over tarball
	I0927 09:55:59.569545    2127 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 09:56:00.538942    2127 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 09:56:00.553751    2127 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0927 09:56:00.557535    2127 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0927 09:56:00.563433    2127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 09:56:00.633861    2127 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 09:56:02.835857    2127 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.202016666s)
	I0927 09:56:02.835953    2127 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 09:56:02.842440    2127 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 09:56:02.842450    2127 cache_images.go:84] Images are preloaded, skipping loading
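	The preload decision reduces to a marker-image check: on the first pass kube-apiserver:v1.31.1 was absent, so the tarball was copied and unpacked; on this pass it is present and loading is skipped. A shell sketch of the logic (minikube does this in Go):

	    if docker images --format '{{.Repository}}:{{.Tag}}' \
	         | grep -q '^registry.k8s.io/kube-apiserver:v1.31.1$'; then
	      echo "images preloaded, skipping load"
	    else
	      echo "copy and extract /preloaded.tar.lz4"
	    fi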
	I0927 09:56:02.842456    2127 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0927 09:56:02.842516    2127 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-289000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 09:56:02.842591    2127 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0927 09:56:02.861909    2127 cni.go:84] Creating CNI manager for ""
	I0927 09:56:02.861923    2127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 09:56:02.861928    2127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 09:56:02.861938    2127 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-289000 NodeName:addons-289000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 09:56:02.862003    2127 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-289000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
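	A config like the one above can be exercised before it touches the node; with kubeadm v1.31 a dry run validates the file and prints the manifests it would write without starting anything:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run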
	I0927 09:56:02.862073    2127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 09:56:02.865830    2127 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 09:56:02.865861    2127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 09:56:02.869369    2127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 09:56:02.875581    2127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 09:56:02.881417    2127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0927 09:56:02.887516    2127 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0927 09:56:02.888889    2127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 09:56:02.893186    2127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 09:56:02.965814    2127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 09:56:02.979161    2127 certs.go:68] Setting up /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000 for IP: 192.168.105.2
	I0927 09:56:02.979186    2127 certs.go:194] generating shared ca certs ...
	I0927 09:56:02.979196    2127 certs.go:226] acquiring lock for ca certs: {Name:mk0418f7d8f4c252d010b1c431fe702739668245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:02.979385    2127 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key
	I0927 09:56:03.146892    2127 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt ...
	I0927 09:56:03.146906    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt: {Name:mk0d178c131966cd0992dc5f8ab6376365b6fc68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:03.147274    2127 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key ...
	I0927 09:56:03.147278    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key: {Name:mk394bfcc195eeb1f23623cc755c1b0f24caa4a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:03.147422    2127 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key
	I0927 09:56:03.195630    2127 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.crt ...
	I0927 09:56:03.195634    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.crt: {Name:mke7cb52765c883aaba1336d89c93e441eb6d1f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:03.195795    2127 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key ...
	I0927 09:56:03.195798    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key: {Name:mk7edd114c409cba3dd0701228833c3f84707e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:03.195925    2127 certs.go:256] generating profile certs ...
	I0927 09:56:03.195964    2127 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.key
	I0927 09:56:03.195973    2127 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt with IP's: []
	I0927 09:56:03.293276    2127 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt ...
	I0927 09:56:03.293283    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: {Name:mk3e08e869465ee9b8259104a072e106d155a219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:03.293477    2127 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.key ...
	I0927 09:56:03.293480    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.key: {Name:mkeba68f71d4c6439386c4ba43252dd1a4652941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:03.293589    2127 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.key.577c9b55
	I0927 09:56:03.293599    2127 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.crt.577c9b55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0927 09:56:03.465079    2127 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.crt.577c9b55 ...
	I0927 09:56:03.465084    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.crt.577c9b55: {Name:mk8c999556ff7b5d3d9a20aacb2ed9e05bad7f4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:03.465248    2127 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.key.577c9b55 ...
	I0927 09:56:03.465252    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.key.577c9b55: {Name:mk8437f78345fe2b7e3bf305ad61fc402d217919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:03.465364    2127 certs.go:381] copying /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.crt.577c9b55 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.crt
	I0927 09:56:03.465729    2127 certs.go:385] copying /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.key.577c9b55 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.key
	I0927 09:56:03.465868    2127 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/proxy-client.key
	I0927 09:56:03.465881    2127 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/proxy-client.crt with IP's: []
	I0927 09:56:03.705322    2127 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/proxy-client.crt ...
	I0927 09:56:03.705337    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/proxy-client.crt: {Name:mk4828a3249557a152f8ac6b6bd1ebf5cc5f8f25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:03.705594    2127 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/proxy-client.key ...
	I0927 09:56:03.705598    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/proxy-client.key: {Name:mk0fa18c548745b92f86e1cb6576860d9f91005d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:03.705858    2127 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 09:56:03.705884    2127 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem (1078 bytes)
	I0927 09:56:03.705902    2127 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem (1123 bytes)
	I0927 09:56:03.705919    2127 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem (1679 bytes)
	I0927 09:56:03.706288    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 09:56:03.716590    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 09:56:03.727186    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 09:56:03.735542    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 09:56:03.744907    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 09:56:03.753134    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 09:56:03.761401    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 09:56:03.769517    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 09:56:03.777608    2127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 09:56:03.785790    2127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 09:56:03.792548    2127 ssh_runner.go:195] Run: openssl version
	I0927 09:56:03.794711    2127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 09:56:03.798253    2127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 09:56:03.799886    2127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0927 09:56:03.799917    2127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 09:56:03.801915    2127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
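	The b5213941.0 name follows OpenSSL's subject-hash convention: CA lookups under /etc/ssl/certs resolve through <hash>.0 symlinks, and the hash is exactly what the openssl invocation above computes:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints b5213941, hence /etc/ssl/certs/b5213941.0 -> minikubeCA.pem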
	I0927 09:56:03.805739    2127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 09:56:03.807163    2127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 09:56:03.807206    2127 kubeadm.go:392] StartCluster: {Name:addons-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 09:56:03.807287    2127 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 09:56:03.812900    2127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 09:56:03.816854    2127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 09:56:03.820685    2127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 09:56:03.824265    2127 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 09:56:03.824272    2127 kubeadm.go:157] found existing configuration files:
	
	I0927 09:56:03.824297    2127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 09:56:03.827745    2127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 09:56:03.827775    2127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 09:56:03.831118    2127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 09:56:03.834285    2127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 09:56:03.834309    2127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 09:56:03.837579    2127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 09:56:03.840969    2127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 09:56:03.840994    2127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 09:56:03.844620    2127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 09:56:03.848281    2127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 09:56:03.848307    2127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 09:56:03.851888    2127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 09:56:03.873070    2127 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 09:56:03.873111    2127 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 09:56:03.911665    2127 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 09:56:03.911720    2127 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 09:56:03.911776    2127 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 09:56:03.916210    2127 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 09:56:03.932439    2127 out.go:235]   - Generating certificates and keys ...
	I0927 09:56:03.932475    2127 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 09:56:03.932505    2127 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 09:56:04.082360    2127 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 09:56:04.124743    2127 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 09:56:04.383264    2127 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 09:56:04.427287    2127 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 09:56:04.505339    2127 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 09:56:04.505403    2127 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-289000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0927 09:56:04.589928    2127 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 09:56:04.594606    2127 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-289000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0927 09:56:04.694508    2127 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 09:56:04.821686    2127 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 09:56:04.961083    2127 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 09:56:04.961118    2127 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 09:56:04.998220    2127 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 09:56:05.133759    2127 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 09:56:05.222045    2127 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 09:56:05.362081    2127 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 09:56:05.466358    2127 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 09:56:05.466698    2127 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 09:56:05.467826    2127 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 09:56:05.473049    2127 out.go:235]   - Booting up control plane ...
	I0927 09:56:05.473096    2127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 09:56:05.473131    2127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 09:56:05.473167    2127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 09:56:05.475762    2127 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 09:56:05.478557    2127 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 09:56:05.478580    2127 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 09:56:05.560890    2127 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 09:56:05.560992    2127 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 09:56:06.064204    2127 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.410833ms
	I0927 09:56:06.064273    2127 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 09:56:09.568259    2127 kubeadm.go:310] [api-check] The API server is healthy after 3.503361085s
	I0927 09:56:09.595675    2127 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 09:56:09.613633    2127 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 09:56:09.627429    2127 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 09:56:09.627613    2127 kubeadm.go:310] [mark-control-plane] Marking the node addons-289000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 09:56:09.632327    2127 kubeadm.go:310] [bootstrap-token] Using token: b24t98.ux8tssfblz86j74k
	I0927 09:56:09.636576    2127 out.go:235]   - Configuring RBAC rules ...
	I0927 09:56:09.636651    2127 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 09:56:09.643544    2127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 09:56:09.646417    2127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 09:56:09.647934    2127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 09:56:09.649202    2127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 09:56:09.650291    2127 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 09:56:09.977462    2127 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 09:56:10.383311    2127 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 09:56:10.974774    2127 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 09:56:10.975305    2127 kubeadm.go:310] 
	I0927 09:56:10.975350    2127 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 09:56:10.975360    2127 kubeadm.go:310] 
	I0927 09:56:10.975428    2127 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 09:56:10.975436    2127 kubeadm.go:310] 
	I0927 09:56:10.975455    2127 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 09:56:10.975497    2127 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 09:56:10.975546    2127 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 09:56:10.975552    2127 kubeadm.go:310] 
	I0927 09:56:10.975585    2127 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 09:56:10.975591    2127 kubeadm.go:310] 
	I0927 09:56:10.975629    2127 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 09:56:10.975636    2127 kubeadm.go:310] 
	I0927 09:56:10.975669    2127 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 09:56:10.975726    2127 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 09:56:10.975776    2127 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 09:56:10.975781    2127 kubeadm.go:310] 
	I0927 09:56:10.975836    2127 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 09:56:10.975892    2127 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 09:56:10.975897    2127 kubeadm.go:310] 
	I0927 09:56:10.975958    2127 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b24t98.ux8tssfblz86j74k \
	I0927 09:56:10.976036    2127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95f688fe91b88dce6b995ff3f6bae2601ecac9e72bc38ebf6a40f1df30a0f1f1 \
	I0927 09:56:10.976052    2127 kubeadm.go:310] 	--control-plane 
	I0927 09:56:10.976058    2127 kubeadm.go:310] 
	I0927 09:56:10.976109    2127 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 09:56:10.976114    2127 kubeadm.go:310] 
	I0927 09:56:10.976164    2127 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b24t98.ux8tssfblz86j74k \
	I0927 09:56:10.976241    2127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95f688fe91b88dce6b995ff3f6bae2601ecac9e72bc38ebf6a40f1df30a0f1f1 
	I0927 09:56:10.976446    2127 kubeadm.go:310] W0927 16:56:04.006868    1600 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 09:56:10.976641    2127 kubeadm.go:310] W0927 16:56:04.007187    1600 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 09:56:10.976710    2127 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
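	Two of these warnings are actionable outside this run. The deprecated v1beta3 spec can be rewritten with the migration command kubeadm itself suggests, and since the bootstrap token above expires after its 24h TTL, a fresh join command can be minted later:

	    kubeadm config migrate --old-config old.yaml --new-config new.yaml
	    kubeadm token create --print-join-command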
	I0927 09:56:10.976722    2127 cni.go:84] Creating CNI manager for ""
	I0927 09:56:10.976733    2127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 09:56:10.981050    2127 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 09:56:10.989166    2127 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 09:56:10.993592    2127 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
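	The 496-byte conflist copied here is the bridge CNI configuration the previous line announces. An illustrative bridge + host-local sketch using the pod CIDR from the kubeadm config; the exact file minikube ships differs in detail:

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [{
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      }]
	    }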
	I0927 09:56:11.000780    2127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 09:56:11.000835    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:11.000893    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-289000 minikube.k8s.io/updated_at=2024_09_27T09_56_11_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=addons-289000 minikube.k8s.io/primary=true
	I0927 09:56:11.056792    2127 ops.go:34] apiserver oom_adj: -16
	I0927 09:56:11.056854    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:11.558920    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:12.057644    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:12.558978    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:13.058974    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:13.558944    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:14.059144    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:14.558907    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:15.058978    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:15.558827    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:16.058834    2127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 09:56:16.107669    2127 kubeadm.go:1113] duration metric: took 5.106966833s to wait for elevateKubeSystemPrivileges
	I0927 09:56:16.107683    2127 kubeadm.go:394] duration metric: took 12.30068175s to StartCluster
	I0927 09:56:16.107692    2127 settings.go:142] acquiring lock: {Name:mk58fc55a93399a03fb1c9ac710554db41068524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:16.107842    2127 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 09:56:16.108014    2127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/kubeconfig: {Name:mk8300c379932403020f33b54e2599e68fb2c757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:56:16.108225    2127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 09:56:16.108255    2127 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 09:56:16.108265    2127 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0927 09:56:16.108303    2127 addons.go:69] Setting yakd=true in profile "addons-289000"
	I0927 09:56:16.108311    2127 addons.go:234] Setting addon yakd=true in "addons-289000"
	I0927 09:56:16.108322    2127 config.go:182] Loaded profile config "addons-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 09:56:16.108334    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.108337    2127 addons.go:69] Setting storage-provisioner=true in profile "addons-289000"
	I0927 09:56:16.108343    2127 addons.go:234] Setting addon storage-provisioner=true in "addons-289000"
	I0927 09:56:16.108349    2127 addons.go:69] Setting volcano=true in profile "addons-289000"
	I0927 09:56:16.108355    2127 addons.go:234] Setting addon volcano=true in "addons-289000"
	I0927 09:56:16.108360    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.108364    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.108417    2127 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-289000"
	I0927 09:56:16.108425    2127 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-289000"
	I0927 09:56:16.108428    2127 addons.go:69] Setting inspektor-gadget=true in profile "addons-289000"
	I0927 09:56:16.108528    2127 addons.go:234] Setting addon inspektor-gadget=true in "addons-289000"
	I0927 09:56:16.108598    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.108468    2127 addons.go:69] Setting default-storageclass=true in profile "addons-289000"
	I0927 09:56:16.108617    2127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-289000"
	I0927 09:56:16.108619    2127 retry.go:31] will retry after 764.926789ms: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.108626    2127 retry.go:31] will retry after 598.377214ms: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.108480    2127 addons.go:69] Setting cloud-spanner=true in profile "addons-289000"
	I0927 09:56:16.108634    2127 addons.go:234] Setting addon cloud-spanner=true in "addons-289000"
	I0927 09:56:16.108640    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.108470    2127 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-289000"
	I0927 09:56:16.108729    2127 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-289000"
	I0927 09:56:16.108753    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.108907    2127 retry.go:31] will retry after 973.745573ms: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.108485    2127 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-289000"
	I0927 09:56:16.108985    2127 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-289000"
	I0927 09:56:16.108996    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.108487    2127 addons.go:69] Setting metrics-server=true in profile "addons-289000"
	I0927 09:56:16.108979    2127 retry.go:31] will retry after 1.028406531s: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.108494    2127 addons.go:69] Setting ingress=true in profile "addons-289000"
	I0927 09:56:16.109012    2127 addons.go:234] Setting addon ingress=true in "addons-289000"
	I0927 09:56:16.109012    2127 addons.go:234] Setting addon metrics-server=true in "addons-289000"
	I0927 09:56:16.109024    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.109035    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.109102    2127 retry.go:31] will retry after 561.231557ms: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.108494    2127 addons.go:69] Setting ingress-dns=true in profile "addons-289000"
	I0927 09:56:16.109111    2127 addons.go:234] Setting addon ingress-dns=true in "addons-289000"
	I0927 09:56:16.109120    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.108490    2127 addons.go:69] Setting registry=true in profile "addons-289000"
	I0927 09:56:16.109185    2127 addons.go:234] Setting addon registry=true in "addons-289000"
	I0927 09:56:16.109205    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.109227    2127 retry.go:31] will retry after 899.15447ms: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.109228    2127 retry.go:31] will retry after 1.303075145s: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.108498    2127 addons.go:69] Setting gcp-auth=true in profile "addons-289000"
	I0927 09:56:16.109318    2127 retry.go:31] will retry after 1.408353408s: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.109324    2127 mustload.go:65] Loading cluster: addons-289000
	I0927 09:56:16.108498    2127 addons.go:69] Setting volumesnapshots=true in profile "addons-289000"
	I0927 09:56:16.109347    2127 addons.go:234] Setting addon volumesnapshots=true in "addons-289000"
	I0927 09:56:16.109367    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.109423    2127 retry.go:31] will retry after 945.969897ms: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.109472    2127 retry.go:31] will retry after 1.308348512s: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.109496    2127 config.go:182] Loaded profile config "addons-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 09:56:16.109681    2127 retry.go:31] will retry after 1.436839024s: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.109966    2127 retry.go:31] will retry after 1.300960409s: connect: dial unix /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/monitor: connect: connection refused
	I0927 09:56:16.110807    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.110816    2127 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-289000"
	I0927 09:56:16.111954    2127 out.go:177] * Verifying Kubernetes components...
	I0927 09:56:16.112317    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:16.117898    2127 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0927 09:56:16.123978    2127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 09:56:16.127898    2127 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 09:56:16.133173    2127 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0927 09:56:16.139809    2127 out.go:177]   - Using image docker.io/busybox:stable
	I0927 09:56:16.145878    2127 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0927 09:56:16.145964    2127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 09:56:16.146030    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 09:56:16.146040    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:16.152274    2127 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 09:56:16.152282    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0927 09:56:16.152290    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:16.194234    2127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
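	The sed pipeline above splices a hosts block into the CoreDNS Corefile ahead of the forward directive, which is what the "host record injected" line below confirms. The fragment it produces:

	    hosts {
	       192.168.105.1 host.minikube.internal
	       fallthrough
	    }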
	I0927 09:56:16.242161    2127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 09:56:16.353949    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 09:56:16.368554    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 09:56:16.494774    2127 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
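The sed pipeline run at 09:56:16.194234 rewrites the coredns ConfigMap in place: it inserts a hosts stanza ahead of the "forward . /etc/resolv.conf" plugin, so host.minikube.internal resolves to the host at 192.168.105.1, and a log directive ahead of errors. After the replace confirmed above, the relevant Corefile fragment should look like the sketch below (the surrounding plugins are assumed to be the stock kubeadm Corefile and are elided):

    .:53 {
        log
        errors
        # ... other plugins unchanged ...
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        # ... other plugins unchanged ...
    }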
	I0927 09:56:16.495312    2127 node_ready.go:35] waiting up to 6m0s for node "addons-289000" to be "Ready" ...
	I0927 09:56:16.497659    2127 node_ready.go:49] node "addons-289000" has status "Ready":"True"
	I0927 09:56:16.497680    2127 node_ready.go:38] duration metric: took 2.34825ms for node "addons-289000" to be "Ready" ...
	I0927 09:56:16.497683    2127 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 09:56:16.506322    2127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fhdwf" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:16.676273    2127 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 09:56:16.679227    2127 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 09:56:16.679234    2127 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 09:56:16.679242    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:16.713148    2127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 09:56:16.716206    2127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 09:56:16.716214    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 09:56:16.716224    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:16.773100    2127 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 09:56:16.773113    2127 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 09:56:16.804921    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 09:56:16.806024    2127 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 09:56:16.806031    2127 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 09:56:16.879331    2127 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 09:56:16.882285    2127 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 09:56:16.882296    2127 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 09:56:16.882307    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:16.882549    2127 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 09:56:16.882554    2127 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 09:56:16.919728    2127 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 09:56:16.919742    2127 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 09:56:16.926125    2127 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 09:56:16.926136    2127 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 09:56:16.938745    2127 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0927 09:56:16.938758    2127 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0927 09:56:16.949480    2127 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 09:56:16.949492    2127 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 09:56:16.969782    2127 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 09:56:16.969797    2127 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 09:56:16.979153    2127 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 09:56:16.979163    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0927 09:56:16.998693    2127 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 09:56:16.998699    2127 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-289000" context rescaled to 1 replicas
	I0927 09:56:16.998705    2127 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 09:56:17.012442    2127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 09:56:17.021629    2127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 09:56:17.026197    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 09:56:17.026238    2127 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 09:56:17.026242    2127 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 09:56:17.028573    2127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 09:56:17.034673    2127 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 09:56:17.034683    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 09:56:17.034693    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:17.035958    2127 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 09:56:17.035964    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 09:56:17.058631    2127 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 09:56:17.062557    2127 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 09:56:17.062567    2127 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 09:56:17.062577    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:17.062849    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 09:56:17.085470    2127 addons.go:234] Setting addon default-storageclass=true in "addons-289000"
	I0927 09:56:17.085493    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:17.086122    2127 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 09:56:17.086128    2127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 09:56:17.086133    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:17.104952    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 09:56:17.142590    2127 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 09:56:17.146625    2127 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 09:56:17.146634    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 09:56:17.146643    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:17.158856    2127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 09:56:17.158867    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 09:56:17.172388    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 09:56:17.219472    2127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 09:56:17.219485    2127 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 09:56:17.300716    2127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 09:56:17.300728    2127 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 09:56:17.346180    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 09:56:17.410255    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 09:56:17.416717    2127 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 09:56:17.420615    2127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 09:56:17.420626    2127 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 09:56:17.420638    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:17.423609    2127 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0927 09:56:17.427666    2127 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 09:56:17.428826    2127 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 09:56:17.428831    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 09:56:17.428841    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:17.433642    2127 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 09:56:17.437684    2127 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 09:56:17.437694    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 09:56:17.437705    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:17.522984    2127 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 09:56:17.527198    2127 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 09:56:17.527212    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 09:56:17.527223    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:17.549924    2127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 09:56:17.552953    2127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 09:56:17.556915    2127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 09:56:17.566899    2127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 09:56:17.572893    2127 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 09:56:17.576927    2127 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 09:56:17.580814    2127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 09:56:17.584881    2127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 09:56:17.588981    2127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 09:56:17.588990    2127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 09:56:17.589001    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:17.626814    2127 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 09:56:17.626826    2127 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 09:56:17.681294    2127 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 09:56:17.681307    2127 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 09:56:17.685065    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 09:56:17.744252    2127 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 09:56:17.744263    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 09:56:17.793579    2127 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 09:56:17.793592    2127 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 09:56:17.817713    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 09:56:17.827495    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 09:56:17.836194    2127 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 09:56:17.836208    2127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 09:56:17.868042    2127 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 09:56:17.868056    2127 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 09:56:17.879365    2127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 09:56:17.879377    2127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 09:56:17.921989    2127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 09:56:17.922006    2127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 09:56:17.923228    2127 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 09:56:17.923234    2127 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 09:56:17.958079    2127 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 09:56:17.958093    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 09:56:18.019495    2127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 09:56:18.019508    2127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 09:56:18.097417    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 09:56:18.225844    2127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 09:56:18.225859    2127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 09:56:18.232200    2127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 09:56:18.232209    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 09:56:18.246624    2127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 09:56:18.246636    2127 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 09:56:18.269825    2127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 09:56:18.269836    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 09:56:18.325108    2127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 09:56:18.325119    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 09:56:18.393712    2127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 09:56:18.393725    2127 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 09:56:18.444945    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 09:56:18.539143    2127 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhdwf" in "kube-system" namespace has status "Ready":"False"
	I0927 09:56:20.379164    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.010658291s)
	I0927 09:56:20.379184    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.574309417s)
	I0927 09:56:20.379227    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.353073667s)
	I0927 09:56:20.379282    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.316477334s)
	I0927 09:56:20.379337    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.206987625s)
	I0927 09:56:20.379323    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.274416375s)
	I0927 09:56:20.379442    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.6944155s)
	I0927 09:56:20.379446    2127 addons.go:475] Verifying addon ingress=true in "addons-289000"
	I0927 09:56:20.379402    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.033259625s)
	I0927 09:56:20.379473    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.552010958s)
	I0927 09:56:20.379482    2127 addons.go:475] Verifying addon registry=true in "addons-289000"
	I0927 09:56:20.379503    2127 addons.go:475] Verifying addon metrics-server=true in "addons-289000"
	I0927 09:56:20.379464    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.561777625s)
	I0927 09:56:20.379430    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.9692105s)
	I0927 09:56:20.379516    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.282125833s)
	W0927 09:56:20.389443    2127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 09:56:20.389459    2127 retry.go:31] will retry after 335.781755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
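This failure is the classic CRD ordering race: the batch apply creates the three VolumeSnapshot CRDs and a VolumeSnapshotClass in one shot, and kubectl's REST mapper has no mapping for the brand-new kind yet, hence "ensure CRDs are installed first". minikube simply retries (and later reapplies with --force at 09:56:20.726244 below). An alternative is to wait for the CRD's Established condition between the two applies; a minimal sketch using the apiextensions client (illustrative, not minikube's code):

    package main

    import (
        "context"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitEstablished polls until the named CRD reports Established=True,
    // after which custom resources of that kind can be applied safely.
    func waitEstablished(c apiextclient.Interface, name string) error {
        return wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
            crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // CRD not visible yet; keep polling
            }
            for _, cond := range crd.Status.Conditions {
                if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
    }

Here the wait would target volumesnapshotclasses.snapshot.storage.k8s.io (and the other two snapshot CRDs) before csi-hostpath-snapshotclass.yaml is applied.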
	I0927 09:56:20.379631    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.02573725s)
	I0927 09:56:20.385209    2127 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-289000 service yakd-dashboard -n yakd-dashboard
	
	I0927 09:56:20.392970    2127 out.go:177] * Verifying registry addon...
	I0927 09:56:20.400173    2127 out.go:177] * Verifying ingress addon...
	I0927 09:56:20.408705    2127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 09:56:20.411523    2127 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 09:56:20.460812    2127 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 09:56:20.460823    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:20.460989    2127 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 09:56:20.460995    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
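Everything from here down is dominated by kapi.go:96 poll lines: list the pods matching a label selector, report the first state that is not Ready, sleep briefly, repeat. A minimal client-go sketch of the readiness check itself (illustrative; the real loop also handles timeouts and pod churn):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allReady reports whether every pod matching the selector has the
    // PodReady condition set to True.
    func allReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        if len(pods.Items) == 0 {
            return false, nil // nothing scheduled yet
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }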
	W0927 09:56:20.470132    2127 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
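The default-storageclass warning above is an optimistic-concurrency failure: something else (most likely the just-installed local-path provisioner) updated the local-path StorageClass between minikube's read and its write, so the update carried a stale resourceVersion and the API server rejected it. The usual remedy is to re-read and retry the mutation; a minimal sketch with client-go's conflict-retry helper (illustrative, assuming an existing clientset):

    package main

    import (
        "context"
        "strconv"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // setDefaultAnnotation flips the is-default-class annotation, re-reading
    // the object on every conflict so each update carries a fresh resourceVersion.
    func setDefaultAnnotation(cs kubernetes.Interface, name string, isDefault bool) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = strconv.FormatBool(isDefault)
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
    }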
	I0927 09:56:20.719021    2127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.274093625s)
	I0927 09:56:20.719039    2127 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-289000"
	I0927 09:56:20.723161    2127 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 09:56:20.726244    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 09:56:20.730539    2127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 09:56:20.733577    2127 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 09:56:20.733590    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:20.918239    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:20.918423    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:21.011490    2127 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhdwf" in "kube-system" namespace has status "Ready":"False"
	I0927 09:56:21.235391    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:21.413562    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:21.414184    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:21.735386    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:21.913875    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:21.914007    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:22.234987    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:22.412638    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:22.413740    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:22.735083    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:22.912653    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:22.913558    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:23.118197    2127 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 09:56:23.118212    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:23.146560    2127 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 09:56:23.155709    2127 addons.go:234] Setting addon gcp-auth=true in "addons-289000"
	I0927 09:56:23.155735    2127 host.go:66] Checking if "addons-289000" exists ...
	I0927 09:56:23.156462    2127 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 09:56:23.156470    2127 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/addons-289000/id_rsa Username:docker}
	I0927 09:56:23.198259    2127 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 09:56:23.213606    2127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 09:56:23.226830    2127 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 09:56:23.226841    2127 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 09:56:23.233447    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:23.234152    2127 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 09:56:23.234160    2127 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 09:56:23.240400    2127 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 09:56:23.240407    2127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 09:56:23.246529    2127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 09:56:23.413408    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:23.413408    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:23.472143    2127 addons.go:475] Verifying addon gcp-auth=true in "addons-289000"
	I0927 09:56:23.475085    2127 out.go:177] * Verifying gcp-auth addon...
	I0927 09:56:23.485542    2127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 09:56:23.510368    2127 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhdwf" in "kube-system" namespace has status "Ready":"False"
	I0927 09:56:23.512799    2127 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 09:56:23.735235    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:23.912707    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:23.914174    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:24.235004    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:24.412698    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:24.413724    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:24.735091    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:24.944443    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:24.944594    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:25.234678    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:25.412494    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:25.413568    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:25.513536    2127 pod_ready.go:103] pod "coredns-7c65d6cfc9-fhdwf" in "kube-system" namespace has status "Ready":"False"
	I0927 09:56:25.734976    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:25.912729    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:25.913438    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:26.234933    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:26.412538    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:26.413431    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:26.735003    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:26.910928    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:26.913412    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:27.010720    2127 pod_ready.go:98] pod "coredns-7c65d6cfc9-fhdwf" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 09:56:26 -0700 PDT Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 09:56:16 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 09:56:16 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 09:56:16 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 09:56:16 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[{IP:192.168.105.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-27 09:56:16 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 09:56:16 -0700 PDT,FinishedAt:2024-09-27 09:56:26 -0700 PDT,ContainerID:docker://cf0b3eafcc6f1a59bdfd0313908ed8a0118a0cb9821e0e743ed67f3127958796,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://cf0b3eafcc6f1a59bdfd0313908ed8a0118a0cb9821e0e743ed67f3127958796 Started:0x14001449fd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x14000624760} {Name:kube-api-access-spsmf MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x14000624770}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 09:56:27.010736    2127 pod_ready.go:82] duration metric: took 10.504569917s for pod "coredns-7c65d6cfc9-fhdwf" in "kube-system" namespace to be "Ready" ...
	E0927 09:56:27.010741    2127 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-fhdwf" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 09:56:26 -0700 PDT Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 09:56:16 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 09:56:16 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 09:56:16 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 09:56:16 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[{IP:192.168.105.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-27 09:56:16 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 09:56:16 -0700 PDT,FinishedAt:2024-09-27 09:56:26 -0700 PDT,ContainerID:docker://cf0b3eafcc6f1a59bdfd0313908ed8a0118a0cb9821e0e743ed67f3127958796,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://cf0b3eafcc6f1a59bdfd0313908ed8a0118a0cb9821e0e743ed67f3127958796 Started:0x14001449fd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x14000624760} {Name:kube-api-access-spsmf MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x14000624770}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 09:56:27.010747    2127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nfg5r" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:27.234601    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:27.413511    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:27.414688    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:27.734869    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:27.912379    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:27.913232    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:28.234435    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:28.410686    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:28.413343    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:28.735195    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:28.912501    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:28.913458    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:29.015071    2127 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfg5r" in "kube-system" namespace has status "Ready":"False"
	I0927 09:56:29.234748    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:29.412377    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:29.413992    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:29.735514    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:29.912745    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:29.913897    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:30.234714    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:30.412519    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:30.413639    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:30.735911    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:30.913863    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:30.914049    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:31.234831    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:31.412405    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:31.413213    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:31.515340    2127 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfg5r" in "kube-system" namespace has status "Ready":"False"
	I0927 09:56:31.734842    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:31.912458    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:31.913452    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:32.234481    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:32.412766    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:32.413702    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:32.734630    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:32.912569    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:32.913719    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:33.234664    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:33.411840    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:33.412931    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:33.734797    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:33.912518    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:33.913124    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:34.015077    2127 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfg5r" in "kube-system" namespace has status "Ready":"False"
	I0927 09:56:34.233097    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:34.412006    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:34.413144    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:34.736240    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:34.913278    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:34.913939    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:35.236937    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:35.412611    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:35.413668    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:35.734704    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:35.913949    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:35.914075    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:36.232860    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:36.412605    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:36.413671    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:36.515430    2127 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfg5r" in "kube-system" namespace has status "Ready":"False"
	[... 09:56:36.7 - 09:56:47.9: 73 similar lines omitted; kapi.go:96 kept polling registry, ingress-nginx, and csi-hostpath-driver (all still Pending) while pod_ready.go:103 kept rechecking coredns-7c65d6cfc9-nfg5r (not yet Ready) ...]
	I0927 09:56:48.014905    2127 pod_ready.go:93] pod "coredns-7c65d6cfc9-nfg5r" in "kube-system" namespace has status "Ready":"True"
	I0927 09:56:48.014914    2127 pod_ready.go:82] duration metric: took 21.004509417s for pod "coredns-7c65d6cfc9-nfg5r" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.014919    2127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-289000" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.017056    2127 pod_ready.go:93] pod "etcd-addons-289000" in "kube-system" namespace has status "Ready":"True"
	I0927 09:56:48.017061    2127 pod_ready.go:82] duration metric: took 2.1385ms for pod "etcd-addons-289000" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.017064    2127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-289000" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.019040    2127 pod_ready.go:93] pod "kube-apiserver-addons-289000" in "kube-system" namespace has status "Ready":"True"
	I0927 09:56:48.019045    2127 pod_ready.go:82] duration metric: took 1.978875ms for pod "kube-apiserver-addons-289000" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.019049    2127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-289000" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.021006    2127 pod_ready.go:93] pod "kube-controller-manager-addons-289000" in "kube-system" namespace has status "Ready":"True"
	I0927 09:56:48.021014    2127 pod_ready.go:82] duration metric: took 1.962083ms for pod "kube-controller-manager-addons-289000" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.021018    2127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7zh6h" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.023146    2127 pod_ready.go:93] pod "kube-proxy-7zh6h" in "kube-system" namespace has status "Ready":"True"
	I0927 09:56:48.023153    2127 pod_ready.go:82] duration metric: took 2.131625ms for pod "kube-proxy-7zh6h" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.023156    2127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-289000" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.234566    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:48.412155    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:48.412947    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:48.413639    2127 pod_ready.go:93] pod "kube-scheduler-addons-289000" in "kube-system" namespace has status "Ready":"True"
	I0927 09:56:48.413645    2127 pod_ready.go:82] duration metric: took 390.49275ms for pod "kube-scheduler-addons-289000" in "kube-system" namespace to be "Ready" ...
	I0927 09:56:48.413649    2127 pod_ready.go:39] duration metric: took 31.916484542s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
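The kapi.go:96 and pod_ready.go lines above all come from one pattern: list pods by label selector, log their phase, and retry on a short ticker until every match reports Ready. A minimal client-go sketch of that pattern follows; the package name, helper names, and 500 ms interval are illustrative assumptions, not minikube's actual implementation.

	// waitforpods.go: sketch of the label-selector polling behind the
	// "waiting for pod" lines above (names and interval are assumed).
	package kapi

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsReady polls pods matching selector in ns until all report Ready.
	func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if !isPodReady(&p) {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}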
	I0927 09:56:48.413658    2127 api_server.go:52] waiting for apiserver process to appear ...
	I0927 09:56:48.413736    2127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 09:56:48.420777    2127 api_server.go:72] duration metric: took 32.313037208s to wait for apiserver process to appear ...
	I0927 09:56:48.420786    2127 api_server.go:88] waiting for apiserver healthz status ...
	I0927 09:56:48.420796    2127 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0927 09:56:48.424067    2127 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0927 09:56:48.424632    2127 api_server.go:141] control plane version: v1.31.1
	I0927 09:56:48.424641    2127 api_server.go:131] duration metric: took 3.851375ms to wait for apiserver health ...
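The healthz probe logged above is an HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A rough Go equivalent is sketched below; skipping TLS verification is a shortcut for illustration only, whereas the real client authenticates with the cluster's CA and client certificates.

	// healthz.go: sketch of the api_server.go healthz check above
	// (illustrative only; real code uses the cluster's TLS credentials).
	package kapi

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz GETs <addr>/healthz and prints the status and body.
	func checkHealthz(addr string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(addr + "/healthz") // e.g. https://192.168.105.2:8443
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		fmt.Printf("%s/healthz returned %d: %s\n", addr, resp.StatusCode, body)
		return nil
	}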
	I0927 09:56:48.424644    2127 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 09:56:48.619604    2127 system_pods.go:59] 17 kube-system pods found
	I0927 09:56:48.619617    2127 system_pods.go:61] "coredns-7c65d6cfc9-nfg5r" [b0ea0a55-c0fe-4efb-b3d0-edb2ba73a1a3] Running
	I0927 09:56:48.619620    2127 system_pods.go:61] "csi-hostpath-attacher-0" [63220c0c-eb49-4b93-b325-ddba47860a92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 09:56:48.619624    2127 system_pods.go:61] "csi-hostpath-resizer-0" [d0b9cbfe-08b6-4c1c-916a-35f4a011eb62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 09:56:48.619627    2127 system_pods.go:61] "csi-hostpathplugin-5m5tr" [3f67c205-54e6-4765-94dc-4020ad9ea7a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 09:56:48.619630    2127 system_pods.go:61] "etcd-addons-289000" [75435348-82e4-430a-8887-7a439d3fac44] Running
	I0927 09:56:48.619632    2127 system_pods.go:61] "kube-apiserver-addons-289000" [1e5a114d-b568-4cfb-9f12-1584a9c86499] Running
	I0927 09:56:48.619634    2127 system_pods.go:61] "kube-controller-manager-addons-289000" [eeab4b65-808b-4a19-b1b3-a1dc52c3bdcc] Running
	I0927 09:56:48.619639    2127 system_pods.go:61] "kube-ingress-dns-minikube" [a62feeb8-290a-44c0-a319-ed74ac338428] Running
	I0927 09:56:48.619641    2127 system_pods.go:61] "kube-proxy-7zh6h" [c51a4726-ebd4-4532-a6ee-2ff0aa472d5c] Running
	I0927 09:56:48.619643    2127 system_pods.go:61] "kube-scheduler-addons-289000" [d3ed5ee5-be28-4ebb-bd3a-1bbdab00ac33] Running
	I0927 09:56:48.619645    2127 system_pods.go:61] "metrics-server-84c5f94fbc-tvxxb" [d00dfe12-41ce-4d1d-bddb-977193e314d9] Running
	I0927 09:56:48.619647    2127 system_pods.go:61] "nvidia-device-plugin-daemonset-xt8td" [08fa369e-90fd-4647-80fc-7b8e9368fb62] Running
	I0927 09:56:48.619649    2127 system_pods.go:61] "registry-66c9cd494c-7cz7s" [244f365b-caba-42ac-9269-727d7fcfef8d] Running
	I0927 09:56:48.619651    2127 system_pods.go:61] "registry-proxy-tn6h5" [2314f77d-9c59-460e-a0dc-812866fd625b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 09:56:48.619655    2127 system_pods.go:61] "snapshot-controller-56fcc65765-czhc9" [5a699399-6e1a-407c-a255-3f7a60aeafad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 09:56:48.619658    2127 system_pods.go:61] "snapshot-controller-56fcc65765-hzfwj" [79352a63-46a0-49dd-93b3-e42c2f0215ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 09:56:48.619660    2127 system_pods.go:61] "storage-provisioner" [6e68b5fd-8361-4e14-8e48-21b036be33f8] Running
	I0927 09:56:48.619663    2127 system_pods.go:74] duration metric: took 195.019583ms to wait for pod list to return data ...
	I0927 09:56:48.619667    2127 default_sa.go:34] waiting for default service account to be created ...
	I0927 09:56:48.734313    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 09:56:48.816046    2127 default_sa.go:45] found service account: "default"
	I0927 09:56:48.816057    2127 default_sa.go:55] duration metric: took 196.390084ms for default service account to be created ...
	I0927 09:56:48.816061    2127 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 09:56:48.912088    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 09:56:48.912989    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 09:56:49.018027    2127 system_pods.go:86] 17 kube-system pods found
	I0927 09:56:49.018040    2127 system_pods.go:89] "coredns-7c65d6cfc9-nfg5r" [b0ea0a55-c0fe-4efb-b3d0-edb2ba73a1a3] Running
	I0927 09:56:49.018045    2127 system_pods.go:89] "csi-hostpath-attacher-0" [63220c0c-eb49-4b93-b325-ddba47860a92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 09:56:49.018049    2127 system_pods.go:89] "csi-hostpath-resizer-0" [d0b9cbfe-08b6-4c1c-916a-35f4a011eb62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 09:56:49.018052    2127 system_pods.go:89] "csi-hostpathplugin-5m5tr" [3f67c205-54e6-4765-94dc-4020ad9ea7a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 09:56:49.018054    2127 system_pods.go:89] "etcd-addons-289000" [75435348-82e4-430a-8887-7a439d3fac44] Running
	I0927 09:56:49.018056    2127 system_pods.go:89] "kube-apiserver-addons-289000" [1e5a114d-b568-4cfb-9f12-1584a9c86499] Running
	I0927 09:56:49.018058    2127 system_pods.go:89] "kube-controller-manager-addons-289000" [eeab4b65-808b-4a19-b1b3-a1dc52c3bdcc] Running
	I0927 09:56:49.018060    2127 system_pods.go:89] "kube-ingress-dns-minikube" [a62feeb8-290a-44c0-a319-ed74ac338428] Running
	I0927 09:56:49.018062    2127 system_pods.go:89] "kube-proxy-7zh6h" [c51a4726-ebd4-4532-a6ee-2ff0aa472d5c] Running
	I0927 09:56:49.018064    2127 system_pods.go:89] "kube-scheduler-addons-289000" [d3ed5ee5-be28-4ebb-bd3a-1bbdab00ac33] Running
	I0927 09:56:49.018066    2127 system_pods.go:89] "metrics-server-84c5f94fbc-tvxxb" [d00dfe12-41ce-4d1d-bddb-977193e314d9] Running
	I0927 09:56:49.018067    2127 system_pods.go:89] "nvidia-device-plugin-daemonset-xt8td" [08fa369e-90fd-4647-80fc-7b8e9368fb62] Running
	I0927 09:56:49.018069    2127 system_pods.go:89] "registry-66c9cd494c-7cz7s" [244f365b-caba-42ac-9269-727d7fcfef8d] Running
	I0927 09:56:49.018071    2127 system_pods.go:89] "registry-proxy-tn6h5" [2314f77d-9c59-460e-a0dc-812866fd625b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 09:56:49.018074    2127 system_pods.go:89] "snapshot-controller-56fcc65765-czhc9" [5a699399-6e1a-407c-a255-3f7a60aeafad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 09:56:49.018078    2127 system_pods.go:89] "snapshot-controller-56fcc65765-hzfwj" [79352a63-46a0-49dd-93b3-e42c2f0215ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 09:56:49.018088    2127 system_pods.go:89] "storage-provisioner" [6e68b5fd-8361-4e14-8e48-21b036be33f8] Running
	I0927 09:56:49.018094    2127 system_pods.go:126] duration metric: took 202.033ms to wait for k8s-apps to be running ...
	I0927 09:56:49.018098    2127 system_svc.go:44] waiting for kubelet service to be running ...
	I0927 09:56:49.018154    2127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 09:56:49.024193    2127 system_svc.go:56] duration metric: WaitForService took 6.092417ms to wait for kubelet
	I0927 09:56:49.024202    2127 kubeadm.go:582] duration metric: took 32.916475041s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 09:56:49.024210    2127 node_conditions.go:102] verifying NodePressure condition ...
	I0927 09:56:49.214874    2127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 09:56:49.214884    2127 node_conditions.go:123] node cpu capacity is 2
	I0927 09:56:49.214889    2127 node_conditions.go:105] duration metric: took 190.679834ms to run NodePressure ...
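The NodePressure step above reads each node's reported capacity (and, in the real code, its pressure conditions). A minimal client-go sketch that reproduces the two capacity lines is below; the helper name is an assumption for illustration.

	// nodeconditions.go: sketch of the node_conditions.go capacity check
	// logged above (helper name assumed; the real check also inspects
	// node pressure conditions).
	package kapi

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists every node and prints the two capacity
	// figures seen in the log: ephemeral storage and CPU.
	func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
			fmt.Printf("node cpu capacity is %s\n", cpu.String())
		}
		return nil
	}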
	I0927 09:56:49.214895    2127 start.go:241] waiting for startup goroutines ...
	I0927 09:56:49.233934    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... 09:56:49.4 - 09:56:50.7: 9 similar kapi.go:96 polling lines omitted (registry, ingress-nginx, csi-hostpath-driver still Pending) ...]
	I0927 09:56:50.911947    2127 kapi.go:107] duration metric: took 30.503747209s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 09:56:50.912986    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 09:56:51.2 - 09:57:18.9: 112 similar kapi.go:96 polling lines omitted; ingress-nginx and csi-hostpath-driver stayed Pending throughout ...]
	I0927 09:57:19.234239    2127 kapi.go:107] duration metric: took 58.504661708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 09:57:19.414484    2127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 09:57:19.9 - 09:57:31.9: 25 similar kapi.go:96 polling lines omitted (ingress-nginx still Pending) ...]
	I0927 09:57:32.414863    2127 kapi.go:107] duration metric: took 1m12.004525042s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 09:57:45.504254    2127 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 09:57:45.504266    2127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 09:57:45.9 - 09:58:52.9: 135 similar kapi.go:96 polling lines omitted (gcp-auth still Pending) ...]
	I0927 09:58:53.489179    2127 kapi.go:107] duration metric: took 2m30.006105291s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 09:58:53.493402    2127 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-289000 cluster.
	I0927 09:58:53.498357    2127 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 09:58:53.502356    2127 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 09:58:53.508418    2127 out.go:177] * Enabled addons: storage-provisioner, inspektor-gadget, nvidia-device-plugin, metrics-server, ingress-dns, cloud-spanner, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0927 09:58:53.512337    2127 addons.go:510] duration metric: took 2m37.406666625s for enable addons: enabled=[storage-provisioner inspektor-gadget nvidia-device-plugin metrics-server ingress-dns cloud-spanner volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0927 09:58:53.512352    2127 start.go:246] waiting for cluster config update ...
	I0927 09:58:53.512363    2127 start.go:255] writing updated cluster config ...
	I0927 09:58:53.512861    2127 ssh_runner.go:195] Run: rm -f paused
	I0927 09:58:53.665348    2127 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0927 09:58:53.669230    2127 out.go:201] 
	W0927 09:58:53.673402    2127 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0927 09:58:53.677310    2127 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0927 09:58:53.685332    2127 out.go:177] * Done! kubectl is now configured to use "addons-289000" cluster and "default" namespace by default
	
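The gcp-auth hints above are terse: the addon mounts GCP credentials into every new pod unless the pod carries the documented opt-out label. A minimal sketch of opting a single pod out, assuming the addon's `gcp-auth-skip-secret` label convention (the pod name and image here are hypothetical):

    # Hypothetical pod that opts out of gcp-auth credential mounting.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"  # opt-out label named in the addon hint above
    spec:
      containers:
      - name: app
        image: busybox                # placeholder image
        command: ["sleep", "3600"]
    EOF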
	
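The closing version-skew warning is worth acting on: kubectl 1.29.2 against a 1.31.1 API server exceeds kubectl's supported ±1 minor-version skew, which is exactly the "minor skew: 2" the log reports. A sketch of the suggested workaround using minikube's bundled kubectl (the alias is a convenience, not minikube output):

    # Use the kubectl that matches the cluster's v1.31.1 instead of the
    # host's /usr/local/bin/kubectl (v1.29.2, two minor versions behind):
    minikube kubectl -- get pods -A
    # Optional per-shell convenience:
    alias kubectl='minikube kubectl --'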
	==> Docker <==
	Sep 27 17:08:16 addons-289000 dockerd[1296]: time="2024-09-27T17:08:16.558974589Z" level=warning msg="cleaning up after shim disconnected" id=33cd6a29fd73c1d6a649009aec0f85d6b27664214c50dd1337e552a2d292ebc4 namespace=moby
	Sep 27 17:08:16 addons-289000 dockerd[1296]: time="2024-09-27T17:08:16.558979297Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 17:08:18 addons-289000 dockerd[1288]: time="2024-09-27T17:08:18.557195854Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=ef39139b02313f1a traceID=699eac4163874a6f316fb0a75a068e40
	Sep 27 17:08:18 addons-289000 dockerd[1288]: time="2024-09-27T17:08:18.558749133Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=ef39139b02313f1a traceID=699eac4163874a6f316fb0a75a068e40
	Sep 27 17:08:44 addons-289000 dockerd[1288]: time="2024-09-27T17:08:44.101370877Z" level=info msg="ignoring event" container=2b065852add2acfb937035db8e8771f822a266a57d8ff97cd2ca707f846dbfee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.101699323Z" level=info msg="shim disconnected" id=2b065852add2acfb937035db8e8771f822a266a57d8ff97cd2ca707f846dbfee namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.101726030Z" level=warning msg="cleaning up after shim disconnected" id=2b065852add2acfb937035db8e8771f822a266a57d8ff97cd2ca707f846dbfee namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.101729988Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1288]: time="2024-09-27T17:08:44.222008968Z" level=info msg="ignoring event" container=a63279d17949dce3e69e29419c8c07bc6023ee8410ca9bb23663f516e7b18cee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.226965523Z" level=info msg="shim disconnected" id=a63279d17949dce3e69e29419c8c07bc6023ee8410ca9bb23663f516e7b18cee namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.227094602Z" level=warning msg="cleaning up after shim disconnected" id=a63279d17949dce3e69e29419c8c07bc6023ee8410ca9bb23663f516e7b18cee namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.227138558Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.236106456Z" level=warning msg="cleanup warnings time=\"2024-09-27T17:08:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.259735193Z" level=info msg="shim disconnected" id=145b3978810ea6c4145d7bcd3669cc7a722c64315d3d6e4aba3a23255c78303f namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.259834940Z" level=warning msg="cleaning up after shim disconnected" id=145b3978810ea6c4145d7bcd3669cc7a722c64315d3d6e4aba3a23255c78303f namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.259845564Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1288]: time="2024-09-27T17:08:44.260059806Z" level=info msg="ignoring event" container=145b3978810ea6c4145d7bcd3669cc7a722c64315d3d6e4aba3a23255c78303f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.317637001Z" level=info msg="shim disconnected" id=63933097fda72c678bfe1f7e7765779bccd1fb168bbbe5f51449c0bdf3f7f3dc namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1288]: time="2024-09-27T17:08:44.317995986Z" level=info msg="ignoring event" container=63933097fda72c678bfe1f7e7765779bccd1fb168bbbe5f51449c0bdf3f7f3dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.318049609Z" level=warning msg="cleaning up after shim disconnected" id=63933097fda72c678bfe1f7e7765779bccd1fb168bbbe5f51449c0bdf3f7f3dc namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.318168230Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.343799722Z" level=info msg="shim disconnected" id=f4b44390b6e2ecc3071fd883357477e62a0d67ffeb3f22b4c7ffea253ad16ca9 namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.343941800Z" level=warning msg="cleaning up after shim disconnected" id=f4b44390b6e2ecc3071fd883357477e62a0d67ffeb3f22b4c7ffea253ad16ca9 namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1296]: time="2024-09-27T17:08:44.343961424Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 17:08:44 addons-289000 dockerd[1288]: time="2024-09-27T17:08:44.343923509Z" level=info msg="ignoring event" container=f4b44390b6e2ecc3071fd883357477e62a0d67ffeb3f22b4c7ffea253ad16ca9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
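The two dockerd entries at 17:08:18 record a failed pull of gcr.io/k8s-minikube/busybox:latest ("unauthorized: authentication failed"). A sketch of reproducing it from inside the node, assuming the addons-289000 profile is still running:

    # Re-run the failing pull inside the minikube guest:
    minikube ssh -p addons-289000 -- docker pull gcr.io/k8s-minikube/busybox:latest
    # Expected to fail with the same "unauthorized: authentication failed"
    # seen in the dockerd log above.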
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	3a5c78bb99f4a       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                                              29 seconds ago      Exited              busybox                                  0                   33cd6a29fd73c       test-local-path
	c9eebc19b22e0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   5b6a76bc759ad       gcp-auth-89d5ffd79-npb7b
	a13e3c8c0c2b7       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago      Running             controller                               0                   de978e522902f       ingress-nginx-controller-bc57996ff-8xlht
	caf598aae8df5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   72c702c0a3c23       csi-hostpathplugin-5m5tr
	899c756a9e9ef       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   72c702c0a3c23       csi-hostpathplugin-5m5tr
	fc44e0d280e07       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   72c702c0a3c23       csi-hostpathplugin-5m5tr
	08ec9bb217901       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   72c702c0a3c23       csi-hostpathplugin-5m5tr
	b80c7ff42eb26       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   72c702c0a3c23       csi-hostpathplugin-5m5tr
	77a2d897b392f       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   91685d3de0f75       csi-hostpath-attacher-0
	2afe3fd759441       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   ccef79d8385b7       csi-hostpath-resizer-0
	2740d5a603f6f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   72c702c0a3c23       csi-hostpathplugin-5m5tr
	e0d8252958a66       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   9a42d38207d5f       snapshot-controller-56fcc65765-czhc9
	7b1ef454a87fb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   ab31fd723eed5       snapshot-controller-56fcc65765-hzfwj
	ab7201de0544d       420193b27261a                                                                                                                                11 minutes ago      Exited              patch                                    1                   0bdfc437cb546       ingress-nginx-admission-patch-w8blc
	85b80efad2c1e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              create                                   0                   da973bda4e6f4       ingress-nginx-admission-create-jnnbg
	145b3978810ea       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              11 minutes ago      Exited              registry-proxy                           0                   f4b44390b6e2e       registry-proxy-tn6h5
	a63279d17949d       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   63933097fda72       registry-66c9cd494c-7cz7s
	76e9463ee58e2       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            12 minutes ago      Running             gadget                                   0                   6d09683626387       gadget-8pdp8
	1729a5232bb48       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             12 minutes ago      Running             minikube-ingress-dns                     0                   d5b0ace96cc7b       kube-ingress-dns-minikube
	824fbde6bc355       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               12 minutes ago      Running             cloud-spanner-emulator                   0                   6aa266444bf5d       cloud-spanner-emulator-5b584cc74-h4rtk
	d8ac81e73be86       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        12 minutes ago      Running             metrics-server                           0                   5470290468bec       metrics-server-84c5f94fbc-tvxxb
	b9dc1c9cf1da8       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       12 minutes ago      Running             local-path-provisioner                   0                   38d5d29e4ca12       local-path-provisioner-86d989889c-rnlfm
	6bfc4ccb39f37       ba04bb24b9575                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   403bef5f79f2d       storage-provisioner
	474371ee24fed       2f6c962e7b831                                                                                                                                12 minutes ago      Running             coredns                                  0                   31287fc60cb5a       coredns-7c65d6cfc9-nfg5r
	e9cdc16efbcbb       24a140c548c07                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   81511e3893030       kube-proxy-7zh6h
	048d3906e84ed       279f381cb3736                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   b1c87717f023a       kube-controller-manager-addons-289000
	8908b68edee51       d3f53a98c0a9d                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   072be2d015005       kube-apiserver-addons-289000
	d7e8b31fd6c9c       7f8aa378bb47d                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   8502a25c94c13       kube-scheduler-addons-289000
	b5b2de9047660       27e3830e14027                                                                                                                                12 minutes ago      Running             etcd                                     0                   6d80f5d7a16bb       etcd-addons-289000
	
	
	==> controller_ingress [a13e3c8c0c2b] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0927 16:57:31.530266       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0927 16:57:31.530352       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0927 16:57:31.533233       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0927 16:57:31.657523       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0927 16:57:31.663838       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0927 16:57:31.667753       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0927 16:57:31.670685       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5ddece54-9992-4e66-8bc9-cb5b205c123f", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0927 16:57:31.674390       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"0f9d2282-efec-4f61-822c-9a578be8a5b9", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0927 16:57:31.674640       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"f105d924-3494-4455-a4bb-b7d7562ec7e5", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0927 16:57:32.870071       7 nginx.go:317] "Starting NGINX process"
	I0927 16:57:32.870202       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0927 16:57:32.870527       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0927 16:57:32.871632       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0927 16:57:32.882153       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0927 16:57:32.882645       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-8xlht"
	I0927 16:57:32.885547       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-8xlht" node="addons-289000"
	I0927 16:57:32.898349       7 controller.go:213] "Backend successfully reloaded"
	I0927 16:57:32.898427       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0927 16:57:32.898476       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-8xlht", UID:"31223c0b-b170-42b5-b688-d514365545b6", APIVersion:"v1", ResourceVersion:"753", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [474371ee24fe] <==
	[INFO] 127.0.0.1:53561 - 52502 "HINFO IN 2690366392587942660.6722042218134552352. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004650793s
	[INFO] 10.244.0.10:34443 - 63601 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000125516s
	[INFO] 10.244.0.10:34443 - 55411 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000247824s
	[INFO] 10.244.0.10:38618 - 1171 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003016s
	[INFO] 10.244.0.10:38618 - 54419 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000437s
	[INFO] 10.244.0.10:35403 - 39300 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030368s
	[INFO] 10.244.0.10:35403 - 18565 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028619s
	[INFO] 10.244.0.10:50019 - 51636 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000050739s
	[INFO] 10.244.0.10:50019 - 42677 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000043991s
	[INFO] 10.244.0.10:34643 - 65174 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000030868s
	[INFO] 10.244.0.10:34643 - 7831 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00001308s
	[INFO] 10.244.0.10:51660 - 38751 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000016913s
	[INFO] 10.244.0.10:51660 - 27742 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000023371s
	[INFO] 10.244.0.10:34099 - 6225 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000011289s
	[INFO] 10.244.0.10:34099 - 51281 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006532s
	[INFO] 10.244.0.10:44221 - 10944 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00001154s
	[INFO] 10.244.0.10:44221 - 49345 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000017205s
	[INFO] 10.244.0.25:40497 - 6314 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001731884s
	[INFO] 10.244.0.25:59022 - 63926 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001784958s
	[INFO] 10.244.0.25:53610 - 29796 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000043785s
	[INFO] 10.244.0.25:47488 - 56928 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00004566s
	[INFO] 10.244.0.25:44089 - 33921 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000025496s
	[INFO] 10.244.0.25:41523 - 28858 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064281s
	[INFO] 10.244.0.25:55733 - 24729 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001922228s
	[INFO] 10.244.0.25:56106 - 64383 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001986092s
	
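The NXDOMAIN/NOERROR pairs above are ordinary Kubernetes search-path expansion, not failures: with the default `ndots:5`, a pod resolving `registry.kube-system.svc.cluster.local` or `storage.googleapis.com` first tries each resolv.conf search domain (`<ns>.svc.cluster.local`, `svc.cluster.local`, `cluster.local`), producing one NXDOMAIN per miss before the final NOERROR answer. A sketch of confirming this from any pod (the pod name is a placeholder; the nameserver shown is the usual kube-dns ClusterIP):

    # Inspect the DNS config a pod actually uses:
    kubectl exec <some-pod> -- cat /etc/resolv.conf
    # Typical output for a pod in the default namespace:
    #   search default.svc.cluster.local svc.cluster.local cluster.local
    #   nameserver 10.96.0.10
    #   options ndots:5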
	
	==> describe nodes <==
	Name:               addons-289000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-289000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=addons-289000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T09_56_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-289000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-289000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 16:56:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-289000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:08:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:08:16 +0000   Fri, 27 Sep 2024 16:56:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:08:16 +0000   Fri, 27 Sep 2024 16:56:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:08:16 +0000   Fri, 27 Sep 2024 16:56:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:08:16 +0000   Fri, 27 Sep 2024 16:56:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-289000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 6da4bc90955b428688c7aa371d708a09
	  System UUID:                6da4bc90955b428688c7aa371d708a09
	  Boot ID:                    d49429c0-a689-4ebc-a965-b399f1ee7e4b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-5b584cc74-h4rtk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gadget                      gadget-8pdp8                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-npb7b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8xlht    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-nfg5r                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-5m5tr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-289000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-289000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-289000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7zh6h                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-289000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-tvxxb             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-czhc9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-hzfwj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-rnlfm     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-289000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-289000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-289000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-289000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-289000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-289000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-289000 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-289000 event: Registered Node addons-289000 in Controller
	
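As a cross-check on the Allocated resources table: the percentages are taken against the node's Allocatable figures. CPU requests sum to 100m (ingress controller) + 100m (coredns) + 100m (etcd) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) + 100m (metrics-server) = 950m of 2000m, i.e. 47%; memory requests 90Mi + 70Mi + 100Mi + 200Mi = 460Mi of 3904740Ki (≈3813Mi), i.e. 12%; the lone 170Mi CoreDNS memory limit works out to ≈4%.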
	
	==> dmesg <==
	[  +2.588176] systemd-fstab-generator[1639]: Ignoring "noauto" option for root device
	[  +0.474167] kauditd_printk_skb: 92 callbacks suppressed
	[  +4.048225] systemd-fstab-generator[2055]: Ignoring "noauto" option for root device
	[  +6.134934] systemd-fstab-generator[2201]: Ignoring "noauto" option for root device
	[  +0.030502] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.037070] kauditd_printk_skb: 310 callbacks suppressed
	[  +5.127226] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.763400] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.198869] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.183882] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.133172] kauditd_printk_skb: 19 callbacks suppressed
	[Sep27 16:57] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.775167] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.295911] kauditd_printk_skb: 12 callbacks suppressed
	[ +15.618019] kauditd_printk_skb: 29 callbacks suppressed
	[Sep27 16:58] kauditd_printk_skb: 38 callbacks suppressed
	[Sep27 16:59] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.562989] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.684132] kauditd_printk_skb: 17 callbacks suppressed
	[ +23.898856] kauditd_printk_skb: 3 callbacks suppressed
	[Sep27 17:07] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.330202] kauditd_printk_skb: 14 callbacks suppressed
	[Sep27 17:08] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.700307] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.441517] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [b5b2de904766] <==
	{"level":"info","ts":"2024-09-27T16:56:06.698222Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-09-27T16:56:06.698603Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-09-27T16:56:07.497840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-27T16:56:07.497873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T16:56:07.497909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-09-27T16:56:07.497927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T16:56:07.497935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-27T16:56:07.497944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-27T16:56:07.497953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-27T16:56:07.504992Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T16:56:07.506046Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-289000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T16:56:07.506102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T16:56:07.506243Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T16:56:07.506264Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T16:56:07.506108Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T16:56:07.506274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T16:56:07.506348Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T16:56:07.506114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T16:56:07.506821Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T16:56:07.510076Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T16:56:07.510298Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-27T16:56:07.514035Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T17:06:07.555358Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1862}
	{"level":"info","ts":"2024-09-27T17:06:07.644330Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1862,"took":"86.401766ms","hash":2673626126,"current-db-size-bytes":9064448,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4829184,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-27T17:06:07.644816Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2673626126,"revision":1862,"compact-revision":-1}
	
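The last three etcd entries are the periodic MVCC compaction: revisions up to 1862 are dropped, which is why only 4.8 MB of the 9.1 MB db file is in use afterwards (returning the file space itself would additionally require a defrag). A sketch of inspecting this via the etcd static pod, assuming the cert paths minikube's kubeadm setup normally uses:

    # Query etcd's view of db size and status (cert paths are assumptions):
    kubectl -n kube-system exec etcd-addons-289000 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint status --write-out=table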
	
	==> gcp-auth [c9eebc19b22e] <==
	2024/09/27 16:58:52 GCP Auth Webhook started!
	2024/09/27 16:59:08 Ready to marshal response ...
	2024/09/27 16:59:08 Ready to write response ...
	2024/09/27 16:59:09 Ready to marshal response ...
	2024/09/27 16:59:09 Ready to write response ...
	2024/09/27 16:59:32 Ready to marshal response ...
	2024/09/27 16:59:32 Ready to write response ...
	2024/09/27 16:59:32 Ready to marshal response ...
	2024/09/27 16:59:32 Ready to write response ...
	2024/09/27 16:59:32 Ready to marshal response ...
	2024/09/27 16:59:32 Ready to write response ...
	2024/09/27 17:07:34 Ready to marshal response ...
	2024/09/27 17:07:34 Ready to write response ...
	2024/09/27 17:07:34 Ready to marshal response ...
	2024/09/27 17:07:34 Ready to write response ...
	2024/09/27 17:07:34 Ready to marshal response ...
	2024/09/27 17:07:34 Ready to write response ...
	2024/09/27 17:07:43 Ready to marshal response ...
	2024/09/27 17:07:43 Ready to write response ...
	2024/09/27 17:08:08 Ready to marshal response ...
	2024/09/27 17:08:08 Ready to write response ...
	2024/09/27 17:08:08 Ready to marshal response ...
	2024/09/27 17:08:08 Ready to write response ...
	2024/09/27 17:08:17 Ready to marshal response ...
	2024/09/27 17:08:17 Ready to write response ...
	
	
	==> kernel <==
	 17:08:44 up 12 min,  0 users,  load average: 0.74, 0.66, 0.42
	Linux addons-289000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8908b68edee5] <==
	I0927 16:59:22.314236       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0927 16:59:22.446217       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 16:59:22.458207       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 16:59:22.468237       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0927 16:59:22.637155       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0927 16:59:22.717262       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	E0927 16:59:22.738230       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0927 16:59:22.742058       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0927 16:59:22.743074       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0927 16:59:22.744362       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0927 16:59:22.748095       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="11.686198ms" method="PUT" path="/apis/scheduling.volcano.sh/v1beta1/queues/test/status" result=null
	I0927 16:59:22.749162       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 16:59:22.800863       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0927 16:59:23.382332       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0927 16:59:23.667347       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0927 16:59:23.717288       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0927 16:59:23.749935       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0927 16:59:23.795573       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0927 16:59:23.800996       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0927 16:59:23.888821       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0927 17:07:34.137860       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.156.7"}
	E0927 17:08:18.367724       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0927 17:08:18.387253       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0927 17:08:18.396419       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0927 17:08:33.403779       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [048d3906e84e] <==
	E0927 17:07:55.169712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 17:07:55.429264       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0927 17:07:56.175555       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:07:56.175609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:07:56.357575       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:07:56.357627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 17:07:56.638433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="1.958µs"
	W0927 17:07:56.899694       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:07:56.899723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 17:08:06.688978       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0927 17:08:15.547829       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:08:15.547862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:08:15.816286       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:08:15.816374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 17:08:16.034247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-289000"
	I0927 17:08:17.586873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="1.999µs"
	W0927 17:08:28.225943       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:08:28.226026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:08:34.897702       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:08:34.897764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:08:35.368553       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:08:35.368594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 17:08:44.116101       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 17:08:44.116123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 17:08:44.190010       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.667µs"
	
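The repeated PartialObjectMetadata list/watch failures here are consistent with the volcano CRDs being removed at 16:59:23 (see the apiserver's "Terminating all watchers from cacher ...volcano.sh" entries above): the garbage collector's metadata informers keep retrying the now-absent group versions until they re-sync.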
	
	==> kube-proxy [e9cdc16efbcb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 16:56:16.690989       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 16:56:16.704918       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0927 16:56:16.705035       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 16:56:16.762571       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 16:56:16.762588       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 16:56:16.762602       1 server_linux.go:169] "Using iptables Proxier"
	I0927 16:56:16.763483       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 16:56:16.763682       1 server.go:483] "Version info" version="v1.31.1"
	I0927 16:56:16.763688       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 16:56:16.764408       1 config.go:199] "Starting service config controller"
	I0927 16:56:16.764428       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 16:56:16.764463       1 config.go:105] "Starting endpoint slice config controller"
	I0927 16:56:16.764466       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 16:56:16.767728       1 config.go:328] "Starting node config controller"
	I0927 16:56:16.768332       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 16:56:16.865291       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 16:56:16.865317       1 shared_informer.go:320] Caches are synced for service config
	I0927 16:56:16.868767       1 shared_informer.go:320] Caches are synced for node config
	
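The truncated error at the top of the kube-proxy log and the "Operation not supported" entry are kube-proxy's startup cleanup probing the `ip`/`ip6` nftables tables on a guest kernel without nftables support, after which it proceeds with the iptables proxier ("Using iptables Proxier"). A sketch of checking the guest directly, assuming the `nft` userspace tool is present in the Buildroot image:

    # Probe nftables support inside the guest; on this kernel an error is
    # expected, matching kube-proxy's fallback to iptables:
    minikube ssh -p addons-289000 -- sudo nft list tables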
	
	==> kube-scheduler [d7e8b31fd6c9] <==
	W0927 16:56:08.295688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 16:56:08.295693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:08.295707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 16:56:08.295715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:08.295802       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 16:56:08.295814       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 16:56:09.107605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 16:56:09.107924       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:09.122683       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 16:56:09.122727       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 16:56:09.201563       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 16:56:09.201712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:09.251916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 16:56:09.252061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:09.253891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 16:56:09.253912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:09.258785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 16:56:09.258854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:09.265454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 16:56:09.265517       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:09.313068       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 16:56:09.313188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 16:56:09.314803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 16:56:09.314849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0927 16:56:11.797874       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 17:08:23 addons-289000 kubelet[2062]: I0927 17:08:23.774502    2062 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/1435e68a-e80d-494e-a12f-319d722732ca-data\") pod \"1435e68a-e80d-494e-a12f-319d722732ca\" (UID: \"1435e68a-e80d-494e-a12f-319d722732ca\") "
	Sep 27 17:08:23 addons-289000 kubelet[2062]: I0927 17:08:23.774941    2062 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1435e68a-e80d-494e-a12f-319d722732ca-data" (OuterVolumeSpecName: "data") pod "1435e68a-e80d-494e-a12f-319d722732ca" (UID: "1435e68a-e80d-494e-a12f-319d722732ca"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 17:08:23 addons-289000 kubelet[2062]: I0927 17:08:23.774981    2062 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1435e68a-e80d-494e-a12f-319d722732ca-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1435e68a-e80d-494e-a12f-319d722732ca" (UID: "1435e68a-e80d-494e-a12f-319d722732ca"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 17:08:23 addons-289000 kubelet[2062]: I0927 17:08:23.875443    2062 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1435e68a-e80d-494e-a12f-319d722732ca-gcp-creds\") on node \"addons-289000\" DevicePath \"\""
	Sep 27 17:08:23 addons-289000 kubelet[2062]: I0927 17:08:23.875463    2062 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/1435e68a-e80d-494e-a12f-319d722732ca-data\") on node \"addons-289000\" DevicePath \"\""
	Sep 27 17:08:24 addons-289000 kubelet[2062]: I0927 17:08:24.783163    2062 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/1435e68a-e80d-494e-a12f-319d722732ca-script\") on node \"addons-289000\" DevicePath \"\""
	Sep 27 17:08:24 addons-289000 kubelet[2062]: I0927 17:08:24.783193    2062 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-48bnv\" (UniqueName: \"kubernetes.io/projected/1435e68a-e80d-494e-a12f-319d722732ca-kube-api-access-48bnv\") on node \"addons-289000\" DevicePath \"\""
	Sep 27 17:08:26 addons-289000 kubelet[2062]: E0927 17:08:26.381261    2062 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="a2f1e578-7f4d-47a9-812a-e7ccce6c3c6d"
	Sep 27 17:08:26 addons-289000 kubelet[2062]: I0927 17:08:26.391965    2062 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1435e68a-e80d-494e-a12f-319d722732ca" path="/var/lib/kubelet/pods/1435e68a-e80d-494e-a12f-319d722732ca/volumes"
	Sep 27 17:08:27 addons-289000 kubelet[2062]: I0927 17:08:27.377250    2062 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-7cz7s" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 17:08:30 addons-289000 kubelet[2062]: E0927 17:08:30.379208    2062 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="755537e7-7644-4f2f-a848-5b44f3b93747"
	Sep 27 17:08:40 addons-289000 kubelet[2062]: E0927 17:08:40.380712    2062 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="a2f1e578-7f4d-47a9-812a-e7ccce6c3c6d"
	Sep 27 17:08:43 addons-289000 kubelet[2062]: E0927 17:08:43.378361    2062 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="755537e7-7644-4f2f-a848-5b44f3b93747"
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.307112    2062 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbvjm\" (UniqueName: \"kubernetes.io/projected/755537e7-7644-4f2f-a848-5b44f3b93747-kube-api-access-nbvjm\") pod \"755537e7-7644-4f2f-a848-5b44f3b93747\" (UID: \"755537e7-7644-4f2f-a848-5b44f3b93747\") "
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.307133    2062 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/755537e7-7644-4f2f-a848-5b44f3b93747-gcp-creds\") pod \"755537e7-7644-4f2f-a848-5b44f3b93747\" (UID: \"755537e7-7644-4f2f-a848-5b44f3b93747\") "
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.307193    2062 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/755537e7-7644-4f2f-a848-5b44f3b93747-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "755537e7-7644-4f2f-a848-5b44f3b93747" (UID: "755537e7-7644-4f2f-a848-5b44f3b93747"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.311037    2062 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755537e7-7644-4f2f-a848-5b44f3b93747-kube-api-access-nbvjm" (OuterVolumeSpecName: "kube-api-access-nbvjm") pod "755537e7-7644-4f2f-a848-5b44f3b93747" (UID: "755537e7-7644-4f2f-a848-5b44f3b93747"). InnerVolumeSpecName "kube-api-access-nbvjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.407971    2062 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nbvjm\" (UniqueName: \"kubernetes.io/projected/755537e7-7644-4f2f-a848-5b44f3b93747-kube-api-access-nbvjm\") on node \"addons-289000\" DevicePath \"\""
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.407991    2062 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/755537e7-7644-4f2f-a848-5b44f3b93747-gcp-creds\") on node \"addons-289000\" DevicePath \"\""
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.509085    2062 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xr96\" (UniqueName: \"kubernetes.io/projected/2314f77d-9c59-460e-a0dc-812866fd625b-kube-api-access-5xr96\") pod \"2314f77d-9c59-460e-a0dc-812866fd625b\" (UID: \"2314f77d-9c59-460e-a0dc-812866fd625b\") "
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.509112    2062 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j4vf\" (UniqueName: \"kubernetes.io/projected/244f365b-caba-42ac-9269-727d7fcfef8d-kube-api-access-7j4vf\") pod \"244f365b-caba-42ac-9269-727d7fcfef8d\" (UID: \"244f365b-caba-42ac-9269-727d7fcfef8d\") "
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.509693    2062 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2314f77d-9c59-460e-a0dc-812866fd625b-kube-api-access-5xr96" (OuterVolumeSpecName: "kube-api-access-5xr96") pod "2314f77d-9c59-460e-a0dc-812866fd625b" (UID: "2314f77d-9c59-460e-a0dc-812866fd625b"). InnerVolumeSpecName "kube-api-access-5xr96". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.509715    2062 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/244f365b-caba-42ac-9269-727d7fcfef8d-kube-api-access-7j4vf" (OuterVolumeSpecName: "kube-api-access-7j4vf") pod "244f365b-caba-42ac-9269-727d7fcfef8d" (UID: "244f365b-caba-42ac-9269-727d7fcfef8d"). InnerVolumeSpecName "kube-api-access-7j4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.610063    2062 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5xr96\" (UniqueName: \"kubernetes.io/projected/2314f77d-9c59-460e-a0dc-812866fd625b-kube-api-access-5xr96\") on node \"addons-289000\" DevicePath \"\""
	Sep 27 17:08:44 addons-289000 kubelet[2062]: I0927 17:08:44.610081    2062 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7j4vf\" (UniqueName: \"kubernetes.io/projected/244f365b-caba-42ac-9269-727d7fcfef8d-kube-api-access-7j4vf\") on node \"addons-289000\" DevicePath \"\""
	
	
	==> storage-provisioner [6bfc4ccb39f3] <==
	I0927 16:56:17.760827       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 16:56:17.768451       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 16:56:17.768563       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 16:56:17.772688       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 16:56:17.772847       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-289000_ec0e87c8-0bd6-4030-88af-ad589571f0f0!
	I0927 16:56:17.773667       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c18bbc8a-79ca-4d47-b2e9-48104d8be45d", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-289000_ec0e87c8-0bd6-4030-88af-ad589571f0f0 became leader
	I0927 16:56:17.873581       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-289000_ec0e87c8-0bd6-4030-88af-ad589571f0f0!
	

-- /stdout --
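A note on the kube-scheduler log above: the burst of RBAC "forbidden" list/watch errors at 16:56:08-09 is typical transient startup noise while the bootstrap role bindings propagate after the apiserver comes up; the final "Caches are synced" line shows the scheduler recovered on its own. If errors like these ever persisted, a quick check is sketched below (assumptions: the bootstrap ClusterRoleBinding carries the standard name system:kube-scheduler, and the caller is permitted to impersonate system users):

	kubectl --context addons-289000 get clusterrolebinding system:kube-scheduler -o wide
	kubectl --context addons-289000 auth can-i list poddisruptionbudgets --as=system:kube-scheduler --all-namespaces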
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-289000 -n addons-289000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-289000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-jnnbg ingress-nginx-admission-patch-w8blc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-289000 describe pod busybox ingress-nginx-admission-create-jnnbg ingress-nginx-admission-patch-w8blc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-289000 describe pod busybox ingress-nginx-admission-create-jnnbg ingress-nginx-admission-patch-w8blc: exit status 1 (43.523125ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-289000/192.168.105.2
	Start Time:       Fri, 27 Sep 2024 09:59:32 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvkbv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vvkbv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to addons-289000
	  Normal   Pulling    7m40s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m40s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m40s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m28s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x21 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jnnbg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-w8blc" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-289000 describe pod busybox ingress-nginx-admission-create-jnnbg ingress-nginx-admission-patch-w8blc: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.30s)
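The failure mode here is pod-level, not a VM or apiserver problem: the busybox and registry-test pods sat in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox from the node's Docker daemon was rejected with "unauthorized: authentication failed" (see the kubelet log and the Events table above). A minimal manual reproduction, assuming the addons-289000 node is still running, is to retry the pull from inside the VM:

	out/minikube-darwin-arm64 ssh -p addons-289000 -- "docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"

If the same unauthorized error shows up there, the problem is on the credential side (the test injects fake GCP credentials via gcp-auth, a plausible culprit given the repeated secret "gcp-auth" not found messages) rather than in the cluster itself.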

TestCertOptions (10.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-200000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-200000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.141588167s)

-- stdout --
	* [cert-options-200000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-200000" primary control-plane node in "cert-options-200000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-200000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-200000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-200000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-200000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-200000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.130459ms)

-- stdout --
	* The control-plane node cert-options-200000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-200000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-200000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-200000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-200000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-200000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.333625ms)

-- stdout --
	* The control-plane node cert-options-200000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-200000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-200000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-200000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-200000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-27 10:35:28.514637 -0700 PDT m=+2429.890686835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-200000 -n cert-options-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-200000 -n cert-options-200000: exit status 7 (31.122125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-200000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-200000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-200000
--- FAIL: TestCertOptions (10.41s)
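The root cause here (and in TestCertExpiration and TestDockerFlags below, which fail identically) is environmental: every qemu2 VM start aborts with Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning no socket_vmnet daemon is listening on the build host. A short triage sketch for the Jenkins agent, assuming the Homebrew-based socket_vmnet setup described in minikube's qemu2 driver docs:

	# Does the socket exist, and is anything serving it?
	ls -l /var/run/socket_vmnet
	# Restart the daemon; it must run as root to create the vmnet interface.
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet

Until the daemon is back, every test that boots a qemu2 VM on the socket_vmnet network will keep failing at GUEST_PROVISION.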

TestCertExpiration (195.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-754000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-754000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.956141875s)

-- stdout --
	* [cert-expiration-754000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-754000" primary control-plane node in "cert-expiration-754000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-754000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-754000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-754000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-754000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-754000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.211039875s)

-- stdout --
	* [cert-expiration-754000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-754000" primary control-plane node in "cert-expiration-754000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-754000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-754000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-754000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-754000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-754000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-754000" primary control-plane node in "cert-expiration-754000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-754000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-754000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-754000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-27 10:38:28.293932 -0700 PDT m=+2609.674665210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-754000 -n cert-expiration-754000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-754000 -n cert-expiration-754000: exit status 7 (66.71625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-754000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-754000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-754000
--- FAIL: TestCertExpiration (195.32s)

TestDockerFlags (10.06s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-126000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-126000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.82355825s)

-- stdout --
	* [docker-flags-126000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-126000" primary control-plane node in "docker-flags-126000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-126000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:35:08.191646    4895 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:35:08.191768    4895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:35:08.191772    4895 out.go:358] Setting ErrFile to fd 2...
	I0927 10:35:08.191774    4895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:35:08.191912    4895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:35:08.193000    4895 out.go:352] Setting JSON to false
	I0927 10:35:08.209167    4895 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3872,"bootTime":1727454636,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:35:08.209235    4895 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:35:08.216216    4895 out.go:177] * [docker-flags-126000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:35:08.222154    4895 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:35:08.222222    4895 notify.go:220] Checking for updates...
	I0927 10:35:08.233259    4895 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:35:08.236122    4895 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:35:08.239164    4895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:35:08.242236    4895 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:35:08.245184    4895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:35:08.248434    4895 config.go:182] Loaded profile config "force-systemd-flag-706000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:35:08.248516    4895 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:35:08.248567    4895 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:35:08.252162    4895 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:35:08.259150    4895 start.go:297] selected driver: qemu2
	I0927 10:35:08.259156    4895 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:35:08.259163    4895 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:35:08.261636    4895 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:35:08.264158    4895 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:35:08.267155    4895 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0927 10:35:08.267176    4895 cni.go:84] Creating CNI manager for ""
	I0927 10:35:08.267219    4895 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:35:08.267223    4895 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:35:08.267251    4895 start.go:340] cluster config:
	{Name:docker-flags-126000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:35:08.270848    4895 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:35:08.278141    4895 out.go:177] * Starting "docker-flags-126000" primary control-plane node in "docker-flags-126000" cluster
	I0927 10:35:08.282134    4895 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:35:08.282152    4895 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:35:08.282162    4895 cache.go:56] Caching tarball of preloaded images
	I0927 10:35:08.282237    4895 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:35:08.282251    4895 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:35:08.282318    4895 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/docker-flags-126000/config.json ...
	I0927 10:35:08.282336    4895 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/docker-flags-126000/config.json: {Name:mk4811b3c38f1c7a8f8ddbb2a60ad2b44aed4b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:35:08.282741    4895 start.go:360] acquireMachinesLock for docker-flags-126000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:35:08.282780    4895 start.go:364] duration metric: took 31.792µs to acquireMachinesLock for "docker-flags-126000"
	I0927 10:35:08.282792    4895 start.go:93] Provisioning new machine with config: &{Name:docker-flags-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:35:08.282822    4895 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:35:08.292131    4895 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0927 10:35:08.310743    4895 start.go:159] libmachine.API.Create for "docker-flags-126000" (driver="qemu2")
	I0927 10:35:08.310782    4895 client.go:168] LocalClient.Create starting
	I0927 10:35:08.310863    4895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:35:08.310891    4895 main.go:141] libmachine: Decoding PEM data...
	I0927 10:35:08.310900    4895 main.go:141] libmachine: Parsing certificate...
	I0927 10:35:08.310942    4895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:35:08.310969    4895 main.go:141] libmachine: Decoding PEM data...
	I0927 10:35:08.310977    4895 main.go:141] libmachine: Parsing certificate...
	I0927 10:35:08.311463    4895 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:35:08.473273    4895 main.go:141] libmachine: Creating SSH key...
	I0927 10:35:08.508137    4895 main.go:141] libmachine: Creating Disk image...
	I0927 10:35:08.508142    4895 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:35:08.508342    4895 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2
	I0927 10:35:08.517461    4895 main.go:141] libmachine: STDOUT: 
	I0927 10:35:08.517489    4895 main.go:141] libmachine: STDERR: 
	I0927 10:35:08.517546    4895 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2 +20000M
	I0927 10:35:08.525571    4895 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:35:08.525586    4895 main.go:141] libmachine: STDERR: 
	I0927 10:35:08.525598    4895 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2
	I0927 10:35:08.525603    4895 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:35:08.525619    4895 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:35:08.525650    4895 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:39:6d:b6:6f:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2
	I0927 10:35:08.527277    4895 main.go:141] libmachine: STDOUT: 
	I0927 10:35:08.527291    4895 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:35:08.527312    4895 client.go:171] duration metric: took 216.529458ms to LocalClient.Create
	I0927 10:35:10.529432    4895 start.go:128] duration metric: took 2.246645709s to createHost
	I0927 10:35:10.529548    4895 start.go:83] releasing machines lock for "docker-flags-126000", held for 2.246780042s
	W0927 10:35:10.529617    4895 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:35:10.547597    4895 out.go:177] * Deleting "docker-flags-126000" in qemu2 ...
	W0927 10:35:10.574650    4895 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:35:10.574687    4895 start.go:729] Will try again in 5 seconds ...
	I0927 10:35:15.576724    4895 start.go:360] acquireMachinesLock for docker-flags-126000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:35:15.576960    4895 start.go:364] duration metric: took 173.5µs to acquireMachinesLock for "docker-flags-126000"
	I0927 10:35:15.577005    4895 start.go:93] Provisioning new machine with config: &{Name:docker-flags-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:35:15.577165    4895 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:35:15.591888    4895 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0927 10:35:15.634754    4895 start.go:159] libmachine.API.Create for "docker-flags-126000" (driver="qemu2")
	I0927 10:35:15.634804    4895 client.go:168] LocalClient.Create starting
	I0927 10:35:15.634913    4895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:35:15.634979    4895 main.go:141] libmachine: Decoding PEM data...
	I0927 10:35:15.634993    4895 main.go:141] libmachine: Parsing certificate...
	I0927 10:35:15.635058    4895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:35:15.635117    4895 main.go:141] libmachine: Decoding PEM data...
	I0927 10:35:15.635128    4895 main.go:141] libmachine: Parsing certificate...
	I0927 10:35:15.636369    4895 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:35:15.819065    4895 main.go:141] libmachine: Creating SSH key...
	I0927 10:35:15.914497    4895 main.go:141] libmachine: Creating Disk image...
	I0927 10:35:15.914503    4895 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:35:15.914693    4895 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2
	I0927 10:35:15.923752    4895 main.go:141] libmachine: STDOUT: 
	I0927 10:35:15.923781    4895 main.go:141] libmachine: STDERR: 
	I0927 10:35:15.923846    4895 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2 +20000M
	I0927 10:35:15.931605    4895 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:35:15.931625    4895 main.go:141] libmachine: STDERR: 
	I0927 10:35:15.931639    4895 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2
	I0927 10:35:15.931643    4895 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:35:15.931654    4895 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:35:15.931679    4895 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:a9:a5:b8:73:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/docker-flags-126000/disk.qcow2
	I0927 10:35:15.933242    4895 main.go:141] libmachine: STDOUT: 
	I0927 10:35:15.933256    4895 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:35:15.933273    4895 client.go:171] duration metric: took 298.469666ms to LocalClient.Create
	I0927 10:35:17.935396    4895 start.go:128] duration metric: took 2.358255208s to createHost
	I0927 10:35:17.935459    4895 start.go:83] releasing machines lock for "docker-flags-126000", held for 2.358539542s
	W0927 10:35:17.935878    4895 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-126000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-126000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:35:17.952598    4895 out.go:201] 
	W0927 10:35:17.956441    4895 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:35:17.956464    4895 out.go:270] * 
	* 
	W0927 10:35:17.959213    4895 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:35:17.972473    4895 out.go:201] 

** /stderr **
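The stderr above shows the qemu2 driver's machine-creation sequence: the boot ISO and disk image are prepared successfully, and only the final network-attached launch fails. Condensed to its three shell steps ($MINIKUBE_HOME and <profile> are shorthands for the long Jenkins paths in the log, not literal values from it):

$ qemu-img convert -f raw -O qcow2 $MINIKUBE_HOME/machines/<profile>/disk.qcow2.raw $MINIKUBE_HOME/machines/<profile>/disk.qcow2
$ qemu-img resize $MINIKUBE_HOME/machines/<profile>/disk.qcow2 +20000M
$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3 ... -daemonize $MINIKUBE_HOME/machines/<profile>/disk.qcow2

socket_vmnet_client connects to the unix socket /var/run/socket_vmnet and hands the connected descriptor to QEMU as fd 3 for the socket netdev; when nothing is listening on that socket, the connect fails before QEMU ever launches, which is exactly the "Connection refused" reported above.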
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-126000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-126000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-126000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.388833ms)

-- stdout --
	* The control-plane node docker-flags-126000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-126000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-126000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-126000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-126000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-126000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-126000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-126000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-126000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.891208ms)

-- stdout --
	* The control-plane node docker-flags-126000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-126000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-126000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-126000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-126000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-126000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-27 10:35:18.113361 -0700 PDT m=+2419.489139876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-126000 -n docker-flags-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-126000 -n docker-flags-126000: exit status 7 (29.765333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-126000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-126000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-126000
--- FAIL: TestDockerFlags (10.06s)
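TestDockerFlags never reached its --docker-env/--docker-opt assertions: both provisioning attempts failed with the same "Connection refused" on /var/run/socket_vmnet, i.e. no socket_vmnet daemon was listening on the CI host. A minimal triage sketch, assuming the Homebrew-based socket_vmnet setup that the /opt/socket_vmnet paths in the log suggest:

# does the socket exist, and is the daemon alive?
$ ls -l /var/run/socket_vmnet
$ pgrep -fl socket_vmnet
# socket_vmnet needs root for vmnet access; with Homebrew it is
# typically (re)started as a root service:
$ sudo brew services start socket_vmnet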

TestForceSystemdFlag (10.32s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-706000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-706000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.1165605s)

-- stdout --
	* [force-systemd-flag-706000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-706000" primary control-plane node in "force-systemd-flag-706000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-706000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:35:02.840916    4874 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:35:02.841043    4874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:35:02.841048    4874 out.go:358] Setting ErrFile to fd 2...
	I0927 10:35:02.841050    4874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:35:02.841174    4874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:35:02.842240    4874 out.go:352] Setting JSON to false
	I0927 10:35:02.858417    4874 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3866,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:35:02.858495    4874 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:35:02.867218    4874 out.go:177] * [force-systemd-flag-706000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:35:02.888247    4874 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:35:02.888281    4874 notify.go:220] Checking for updates...
	I0927 10:35:02.897131    4874 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:35:02.900081    4874 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:35:02.903187    4874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:35:02.906192    4874 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:35:02.907764    4874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:35:02.911531    4874 config.go:182] Loaded profile config "force-systemd-env-679000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:35:02.911640    4874 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:35:02.911700    4874 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:35:02.916161    4874 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:35:02.921155    4874 start.go:297] selected driver: qemu2
	I0927 10:35:02.921163    4874 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:35:02.921173    4874 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:35:02.923726    4874 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:35:02.927125    4874 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:35:02.930308    4874 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 10:35:02.930330    4874 cni.go:84] Creating CNI manager for ""
	I0927 10:35:02.930370    4874 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:35:02.930375    4874 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:35:02.930408    4874 start.go:340] cluster config:
	{Name:force-systemd-flag-706000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:35:02.934666    4874 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:35:02.942163    4874 out.go:177] * Starting "force-systemd-flag-706000" primary control-plane node in "force-systemd-flag-706000" cluster
	I0927 10:35:02.946079    4874 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:35:02.946101    4874 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:35:02.946111    4874 cache.go:56] Caching tarball of preloaded images
	I0927 10:35:02.946191    4874 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:35:02.946203    4874 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:35:02.946264    4874 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/force-systemd-flag-706000/config.json ...
	I0927 10:35:02.946278    4874 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/force-systemd-flag-706000/config.json: {Name:mk51000381a4aa1139af53f5e1ec04eb20cf9200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:35:02.946547    4874 start.go:360] acquireMachinesLock for force-systemd-flag-706000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:35:02.946591    4874 start.go:364] duration metric: took 32.958µs to acquireMachinesLock for "force-systemd-flag-706000"
	I0927 10:35:02.946607    4874 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-706000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:35:02.946649    4874 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:35:02.954172    4874 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0927 10:35:02.973914    4874 start.go:159] libmachine.API.Create for "force-systemd-flag-706000" (driver="qemu2")
	I0927 10:35:02.973947    4874 client.go:168] LocalClient.Create starting
	I0927 10:35:02.974017    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:35:02.974053    4874 main.go:141] libmachine: Decoding PEM data...
	I0927 10:35:02.974063    4874 main.go:141] libmachine: Parsing certificate...
	I0927 10:35:02.974103    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:35:02.974136    4874 main.go:141] libmachine: Decoding PEM data...
	I0927 10:35:02.974145    4874 main.go:141] libmachine: Parsing certificate...
	I0927 10:35:02.974566    4874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:35:03.134842    4874 main.go:141] libmachine: Creating SSH key...
	I0927 10:35:03.329473    4874 main.go:141] libmachine: Creating Disk image...
	I0927 10:35:03.329480    4874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:35:03.329695    4874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2
	I0927 10:35:03.339131    4874 main.go:141] libmachine: STDOUT: 
	I0927 10:35:03.339154    4874 main.go:141] libmachine: STDERR: 
	I0927 10:35:03.339210    4874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2 +20000M
	I0927 10:35:03.347209    4874 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:35:03.347235    4874 main.go:141] libmachine: STDERR: 
	I0927 10:35:03.347253    4874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2
	I0927 10:35:03.347258    4874 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:35:03.347276    4874 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:35:03.347315    4874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:41:04:ad:b4:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2
	I0927 10:35:03.349004    4874 main.go:141] libmachine: STDOUT: 
	I0927 10:35:03.349016    4874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:35:03.349031    4874 client.go:171] duration metric: took 375.088875ms to LocalClient.Create
	I0927 10:35:05.351144    4874 start.go:128] duration metric: took 2.404537291s to createHost
	I0927 10:35:05.351201    4874 start.go:83] releasing machines lock for "force-systemd-flag-706000", held for 2.404662292s
	W0927 10:35:05.351259    4874 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:35:05.367354    4874 out.go:177] * Deleting "force-systemd-flag-706000" in qemu2 ...
	W0927 10:35:05.404427    4874 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:35:05.404452    4874 start.go:729] Will try again in 5 seconds ...
	I0927 10:35:10.404580    4874 start.go:360] acquireMachinesLock for force-systemd-flag-706000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:35:10.529657    4874 start.go:364] duration metric: took 124.996959ms to acquireMachinesLock for "force-systemd-flag-706000"
	I0927 10:35:10.529814    4874 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-706000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:35:10.530072    4874 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:35:10.535689    4874 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0927 10:35:10.586601    4874 start.go:159] libmachine.API.Create for "force-systemd-flag-706000" (driver="qemu2")
	I0927 10:35:10.586658    4874 client.go:168] LocalClient.Create starting
	I0927 10:35:10.586773    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:35:10.586839    4874 main.go:141] libmachine: Decoding PEM data...
	I0927 10:35:10.586857    4874 main.go:141] libmachine: Parsing certificate...
	I0927 10:35:10.586925    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:35:10.586968    4874 main.go:141] libmachine: Decoding PEM data...
	I0927 10:35:10.586982    4874 main.go:141] libmachine: Parsing certificate...
	I0927 10:35:10.587581    4874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:35:10.768725    4874 main.go:141] libmachine: Creating SSH key...
	I0927 10:35:10.850895    4874 main.go:141] libmachine: Creating Disk image...
	I0927 10:35:10.850901    4874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:35:10.851093    4874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2
	I0927 10:35:10.860182    4874 main.go:141] libmachine: STDOUT: 
	I0927 10:35:10.860201    4874 main.go:141] libmachine: STDERR: 
	I0927 10:35:10.860274    4874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2 +20000M
	I0927 10:35:10.868042    4874 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:35:10.868057    4874 main.go:141] libmachine: STDERR: 
	I0927 10:35:10.868070    4874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2
	I0927 10:35:10.868074    4874 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:35:10.868098    4874 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:35:10.868137    4874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:6e:88:2b:df:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-flag-706000/disk.qcow2
	I0927 10:35:10.869742    4874 main.go:141] libmachine: STDOUT: 
	I0927 10:35:10.869756    4874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:35:10.869769    4874 client.go:171] duration metric: took 283.114459ms to LocalClient.Create
	I0927 10:35:12.872048    4874 start.go:128] duration metric: took 2.341888792s to createHost
	I0927 10:35:12.872139    4874 start.go:83] releasing machines lock for "force-systemd-flag-706000", held for 2.342512625s
	W0927 10:35:12.872474    4874 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-706000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-706000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:35:12.892459    4874 out.go:201] 
	W0927 10:35:12.901356    4874 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:35:12.901382    4874 out.go:270] * 
	* 
	W0927 10:35:12.903794    4874 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:35:12.915253    4874 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-706000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-706000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-706000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.648292ms)

-- stdout --
	* The control-plane node force-systemd-flag-706000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-706000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-706000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-27 10:35:13.011841 -0700 PDT m=+2414.387486293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-706000 -n force-systemd-flag-706000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-706000 -n force-systemd-flag-706000: exit status 7 (35.554541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-706000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-706000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-706000
--- FAIL: TestForceSystemdFlag (10.32s)
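As with TestDockerFlags, the failure is environmental; the cgroup-driver assertion this test exists for was never reached. Against a VM that actually boots, the check at docker_test.go:110 amounts to the command below, and with --force-systemd it is expected to report "systemd" rather than "cgroupfs" (expected output shown as a comment, since no VM came up in this run):

$ out/minikube-darwin-arm64 -p force-systemd-flag-706000 ssh "docker info --format {{.CgroupDriver}}"
# expected: systemd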

TestForceSystemdEnv (12.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-679000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0927 10:34:57.640179    2039 install.go:79] stdout: 
W0927 10:34:57.640382    2039 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/001/docker-machine-driver-hyperkit 

I0927 10:34:57.640409    2039 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/001/docker-machine-driver-hyperkit]
I0927 10:34:57.654093    2039 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/001/docker-machine-driver-hyperkit]
I0927 10:34:57.664585    2039 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/001/docker-machine-driver-hyperkit]
I0927 10:34:57.673374    2039 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/001/docker-machine-driver-hyperkit]
I0927 10:34:57.689056    2039 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 10:34:57.689156    2039 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0927 10:34:59.468841    2039 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0927 10:34:59.468862    2039 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0927 10:34:59.468911    2039 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0927 10:34:59.468942    2039 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/002/docker-machine-driver-hyperkit
I0927 10:34:59.860271    2039 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1043c2d40 0x1043c2d40 0x1043c2d40 0x1043c2d40 0x1043c2d40 0x1043c2d40 0x1043c2d40] Decompressors:map[bz2:0x140004ff6f0 gz:0x140004ff6f8 tar:0x140004ff6a0 tar.bz2:0x140004ff6b0 tar.gz:0x140004ff6c0 tar.xz:0x140004ff6d0 tar.zst:0x140004ff6e0 tbz2:0x140004ff6b0 tgz:0x140004ff6c0 txz:0x140004ff6d0 tzst:0x140004ff6e0 xz:0x140004ff700 zip:0x140004ff710 zst:0x140004ff708] Getters:map[file:0x1400056ed90 http:0x1400046a8c0 https:0x1400046a910] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 10:34:59.860417    2039 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/002/docker-machine-driver-hyperkit
I0927 10:35:02.768881    2039 install.go:79] stdout: 
W0927 10:35:02.769062    2039 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/002/docker-machine-driver-hyperkit 

I0927 10:35:02.769085    2039 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/002/docker-machine-driver-hyperkit]
I0927 10:35:02.783233    2039 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/002/docker-machine-driver-hyperkit]
I0927 10:35:02.794775    2039 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/002/docker-machine-driver-hyperkit]
I0927 10:35:02.803617    2039 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-679000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.041928792s)

-- stdout --
	* [force-systemd-env-679000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-679000" primary control-plane node in "force-systemd-env-679000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-679000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:34:55.953403    4840 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:34:55.953544    4840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:34:55.953547    4840 out.go:358] Setting ErrFile to fd 2...
	I0927 10:34:55.953550    4840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:34:55.953673    4840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:34:55.954752    4840 out.go:352] Setting JSON to false
	I0927 10:34:55.970747    4840 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3859,"bootTime":1727454636,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:34:55.970808    4840 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:34:55.977974    4840 out.go:177] * [force-systemd-env-679000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:34:55.987866    4840 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:34:55.987900    4840 notify.go:220] Checking for updates...
	I0927 10:34:55.994847    4840 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:34:55.997870    4840 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:34:56.000737    4840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:34:56.003830    4840 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:34:56.006854    4840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0927 10:34:56.008630    4840 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:34:56.008674    4840 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:34:56.013497    4840 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:34:56.022740    4840 start.go:297] selected driver: qemu2
	I0927 10:34:56.022750    4840 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:34:56.022758    4840 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:34:56.025150    4840 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:34:56.027867    4840 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:34:56.030939    4840 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 10:34:56.030953    4840 cni.go:84] Creating CNI manager for ""
	I0927 10:34:56.030976    4840 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:34:56.030980    4840 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:34:56.031013    4840 start.go:340] cluster config:
	{Name:force-systemd-env-679000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-679000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:34:56.034755    4840 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:56.041839    4840 out.go:177] * Starting "force-systemd-env-679000" primary control-plane node in "force-systemd-env-679000" cluster
	I0927 10:34:56.045936    4840 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:34:56.045955    4840 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:34:56.045967    4840 cache.go:56] Caching tarball of preloaded images
	I0927 10:34:56.046054    4840 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:34:56.046060    4840 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:34:56.046116    4840 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/force-systemd-env-679000/config.json ...
	I0927 10:34:56.046128    4840 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/force-systemd-env-679000/config.json: {Name:mkeb19c04edf83b1160464e72e50cd344055139d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:34:56.046530    4840 start.go:360] acquireMachinesLock for force-systemd-env-679000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:34:56.046567    4840 start.go:364] duration metric: took 30.291µs to acquireMachinesLock for "force-systemd-env-679000"
	I0927 10:34:56.046580    4840 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-679000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-679000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:34:56.046615    4840 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:34:56.054827    4840 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0927 10:34:56.073367    4840 start.go:159] libmachine.API.Create for "force-systemd-env-679000" (driver="qemu2")
	I0927 10:34:56.073401    4840 client.go:168] LocalClient.Create starting
	I0927 10:34:56.073475    4840 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:34:56.073515    4840 main.go:141] libmachine: Decoding PEM data...
	I0927 10:34:56.073525    4840 main.go:141] libmachine: Parsing certificate...
	I0927 10:34:56.073562    4840 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:34:56.073588    4840 main.go:141] libmachine: Decoding PEM data...
	I0927 10:34:56.073599    4840 main.go:141] libmachine: Parsing certificate...
	I0927 10:34:56.074089    4840 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:34:56.234386    4840 main.go:141] libmachine: Creating SSH key...
	I0927 10:34:56.290699    4840 main.go:141] libmachine: Creating Disk image...
	I0927 10:34:56.290705    4840 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:34:56.290904    4840 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2
	I0927 10:34:56.299891    4840 main.go:141] libmachine: STDOUT: 
	I0927 10:34:56.299907    4840 main.go:141] libmachine: STDERR: 
	I0927 10:34:56.299967    4840 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2 +20000M
	I0927 10:34:56.307759    4840 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:34:56.307775    4840 main.go:141] libmachine: STDERR: 
	I0927 10:34:56.307814    4840 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2
	I0927 10:34:56.307818    4840 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:34:56.307830    4840 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:34:56.307865    4840 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:2e:31:f2:c5:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2
	I0927 10:34:56.309414    4840 main.go:141] libmachine: STDOUT: 
	I0927 10:34:56.309427    4840 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:34:56.309447    4840 client.go:171] duration metric: took 236.046292ms to LocalClient.Create
	I0927 10:34:58.311477    4840 start.go:128] duration metric: took 2.264912042s to createHost
	I0927 10:34:58.311521    4840 start.go:83] releasing machines lock for "force-systemd-env-679000", held for 2.265005792s
	W0927 10:34:58.311540    4840 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:34:58.329732    4840 out.go:177] * Deleting "force-systemd-env-679000" in qemu2 ...
	W0927 10:34:58.343430    4840 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:34:58.343437    4840 start.go:729] Will try again in 5 seconds ...
	I0927 10:35:03.345415    4840 start.go:360] acquireMachinesLock for force-systemd-env-679000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:35:05.351350    4840 start.go:364] duration metric: took 2.005924709s to acquireMachinesLock for "force-systemd-env-679000"
	I0927 10:35:05.351467    4840 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-679000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-679000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:35:05.351723    4840 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:35:05.362364    4840 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0927 10:35:05.411772    4840 start.go:159] libmachine.API.Create for "force-systemd-env-679000" (driver="qemu2")
	I0927 10:35:05.411818    4840 client.go:168] LocalClient.Create starting
	I0927 10:35:05.411985    4840 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:35:05.412046    4840 main.go:141] libmachine: Decoding PEM data...
	I0927 10:35:05.412064    4840 main.go:141] libmachine: Parsing certificate...
	I0927 10:35:05.412131    4840 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:35:05.412175    4840 main.go:141] libmachine: Decoding PEM data...
	I0927 10:35:05.412190    4840 main.go:141] libmachine: Parsing certificate...
	I0927 10:35:05.413701    4840 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:35:05.682589    4840 main.go:141] libmachine: Creating SSH key...
	I0927 10:35:05.888305    4840 main.go:141] libmachine: Creating Disk image...
	I0927 10:35:05.888316    4840 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:35:05.888575    4840 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2
	I0927 10:35:05.898113    4840 main.go:141] libmachine: STDOUT: 
	I0927 10:35:05.898195    4840 main.go:141] libmachine: STDERR: 
	I0927 10:35:05.898260    4840 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2 +20000M
	I0927 10:35:05.906118    4840 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:35:05.906137    4840 main.go:141] libmachine: STDERR: 
	I0927 10:35:05.906151    4840 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2
	I0927 10:35:05.906162    4840 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:35:05.906170    4840 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:35:05.906198    4840 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:1b:ef:21:89:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/force-systemd-env-679000/disk.qcow2
	I0927 10:35:05.907747    4840 main.go:141] libmachine: STDOUT: 
	I0927 10:35:05.907766    4840 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:35:05.907778    4840 client.go:171] duration metric: took 495.96725ms to LocalClient.Create
	I0927 10:35:07.909891    4840 start.go:128] duration metric: took 2.558196417s to createHost
	I0927 10:35:07.910045    4840 start.go:83] releasing machines lock for "force-systemd-env-679000", held for 2.558683208s
	W0927 10:35:07.910476    4840 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-679000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:35:07.934032    4840 out.go:201] 
	W0927 10:35:07.937826    4840 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:35:07.937847    4840 out.go:270] * 
	W0927 10:35:07.939751    4840 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:35:07.950896    4840 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-679000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-679000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-679000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (82.693292ms)

-- stdout --
	* The control-plane node force-systemd-env-679000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-679000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-679000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-27 10:35:08.051247 -0700 PDT m=+2409.426763085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-679000 -n force-systemd-env-679000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-679000 -n force-systemd-env-679000: exit status 7 (34.558375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-679000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-679000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-679000
--- FAIL: TestForceSystemdEnv (12.24s)
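The cause repeated throughout this failure is `Failed to connect to "/var/run/socket_vmnet": Connection refused`: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client could not reach the socket_vmnet daemon's unix socket, so host creation failed before the VM ever booted. A minimal host-side triage sketch (socket and client paths are taken from the log above; the Homebrew service name assumes socket_vmnet was installed via brew, as described in minikube's qemu2 driver docs):

	# Does the daemon's unix socket exist, and is the daemon running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If it was installed with Homebrew and is not running, (re)start it:
	sudo brew services start socket_vmnet

Once the daemon is listening again, the `socket_vmnet_client ... qemu-system-aarch64 ...` invocation recorded above should stop failing with `Connection refused`.
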
TestFunctional/parallel/ServiceCmdConnect (41.83s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-513000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-513000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-npgjt" [0a5cc8df-3b79-4ee9-958f-e108e0a8125e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-npgjt" [0a5cc8df-3b79-4ee9-958f-e108e0a8125e] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.015970542s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:32665
functional_test.go:1661: error fetching http://192.168.105.4:32665: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
I0927 10:14:23.705953    2039 retry.go:31] will retry after 998.242428ms: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32665: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
I0927 10:14:24.706811    2039 retry.go:31] will retry after 1.373981353s: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32665: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
I0927 10:14:26.083856    2039 retry.go:31] will retry after 2.445031095s: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32665: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
I0927 10:14:28.532462    2039 retry.go:31] will retry after 4.479282523s: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32665: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
I0927 10:14:33.014212    2039 retry.go:31] will retry after 3.975109807s: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32665: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
I0927 10:14:36.991881    2039 retry.go:31] will retry after 5.306500116s: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32665: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
I0927 10:14:42.301789    2039 retry.go:31] will retry after 10.985718929s: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32665: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:32665: Get "http://192.168.105.4:32665": dial tcp 192.168.105.4:32665: connect: connection refused
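Each `retry.go:31` line above is the test's retry helper backing off before the next GET; the waits grow roughly geometrically, with some randomization, until the poll gives up. A minimal shell sketch of the same poll-with-backoff pattern (the URL is the NodePort endpoint reported by the test; the attempt count and delays here are illustrative, not minikube's actual values):

	url=http://192.168.105.4:32665
	delay=1
	for attempt in 1 2 3 4 5 6; do
	  curl -sf "$url" && break          # success: stop retrying
	  echo "attempt $attempt failed; retrying in ${delay}s" >&2
	  sleep "$delay"
	  delay=$((delay * 2))              # double the wait each round
	done
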
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-513000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-npgjt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-513000/192.168.105.4
Start Time:       Fri, 27 Sep 2024 10:14:12 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://7ebf1abd8a20f508ff811967379c5179cf0c2ee88aa1e933ee52cf65d3aebc0c
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 27 Sep 2024 10:14:33 -0700
      Finished:     Fri, 27 Sep 2024 10:14:33 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bcdl6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-bcdl6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  40s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-npgjt to functional-513000
  Normal   Pulling    41s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     35s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 5.459s (5.459s including waiting). Image size: 84957542 bytes.
  Normal   Created    20s (x3 over 35s)  kubelet            Created container echoserver-arm
  Normal   Started    20s (x3 over 35s)  kubelet            Started container echoserver-arm
  Normal   Pulled     20s (x2 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    8s (x4 over 34s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-npgjt_default(0a5cc8df-3b79-4ee9-958f-e108e0a8125e)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-513000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
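That one log line is the proximate cause of this failure: `exec format error` means the kernel refused to run the container's entrypoint because the binary was built for a different CPU architecture than the arm64 node, so the pod crash-loops and the NodePort never gets a ready endpoint (note the empty `Endpoints:` field in the service description below). A quick way to check which platform an image was actually built for (image name taken from the log; run on any docker host that can pull it):

	docker pull registry.k8s.io/echoserver-arm:1.8
	docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8

If this prints anything other than linux/arm64, the deployment needs an image built for the node's architecture.
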
functional_test.go:1614: (dbg) Run:  kubectl --context functional-513000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.163.229
IPs:                      10.98.163.229
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32665/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-513000 -n functional-513000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                  Args                                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-513000 ssh sudo cat                                                                         | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | /etc/ssl/certs/3ec20f2e.0                                                                              |                   |         |         |                     |                     |
	| image   | functional-513000 image load --daemon                                                                  | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | kicbase/echo-server:functional-513000                                                                  |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| image   | functional-513000 image ls                                                                             | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	| image   | functional-513000 image load --daemon                                                                  | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | kicbase/echo-server:functional-513000                                                                  |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| image   | functional-513000 image ls                                                                             | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	| image   | functional-513000 image load --daemon                                                                  | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | kicbase/echo-server:functional-513000                                                                  |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| image   | functional-513000 image ls                                                                             | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	| image   | functional-513000 image save                                                                           | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | kicbase/echo-server:functional-513000                                                                  |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                          |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| image   | functional-513000 image rm                                                                             | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | kicbase/echo-server:functional-513000                                                                  |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| image   | functional-513000 image ls                                                                             | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	| image   | functional-513000 image load                                                                           | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                          |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| image   | functional-513000 image ls                                                                             | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	| image   | functional-513000 image save --daemon                                                                  | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | kicbase/echo-server:functional-513000                                                                  |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| cp      | functional-513000 cp                                                                                   | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | testdata/cp-test.txt                                                                                   |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-513000 ssh -n                                                                               | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | functional-513000 sudo cat                                                                             |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                               |                   |         |         |                     |                     |
	| cp      | functional-513000 cp functional-513000:/home/docker/cp-test.txt                                        | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2233712225/001/cp-test.txt |                   |         |         |                     |                     |
	| ssh     | functional-513000 ssh -n                                                                               | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | functional-513000 sudo cat                                                                             |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                               |                   |         |         |                     |                     |
	| cp      | functional-513000 cp                                                                                   | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | testdata/cp-test.txt                                                                                   |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-513000 ssh -n                                                                               | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | functional-513000 sudo cat                                                                             |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-513000 ssh echo                                                                             | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | hello                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-513000 ssh cat                                                                              | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | /etc/hostname                                                                                          |                   |         |         |                     |                     |
	| tunnel  | functional-513000 tunnel                                                                               | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| tunnel  | functional-513000 tunnel                                                                               | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| tunnel  | functional-513000 tunnel                                                                               | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| service | functional-513000 service                                                                              | functional-513000 | jenkins | v1.34.0 | 27 Sep 24 10:14 PDT | 27 Sep 24 10:14 PDT |
	|         | hello-node-connect --url                                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 10:13:27
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 10:13:27.644906    3041 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:13:27.645039    3041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:13:27.645040    3041 out.go:358] Setting ErrFile to fd 2...
	I0927 10:13:27.645042    3041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:13:27.645186    3041 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:13:27.646327    3041 out.go:352] Setting JSON to false
	I0927 10:13:27.663546    3041 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2571,"bootTime":1727454636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:13:27.663620    3041 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:13:27.667058    3041 out.go:177] * [functional-513000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:13:27.678077    3041 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:13:27.678140    3041 notify.go:220] Checking for updates...
	I0927 10:13:27.685013    3041 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:13:27.687988    3041 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:13:27.691032    3041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:13:27.694063    3041 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:13:27.697039    3041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:13:27.700288    3041 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:13:27.700332    3041 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:13:27.704986    3041 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:13:27.711965    3041 start.go:297] selected driver: qemu2
	I0927 10:13:27.711968    3041 start.go:901] validating driver "qemu2" against &{Name:functional-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:13:27.712009    3041 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:13:27.714245    3041 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:13:27.714264    3041 cni.go:84] Creating CNI manager for ""
	I0927 10:13:27.714287    3041 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:13:27.714319    3041 start.go:340] cluster config:
	{Name:functional-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:13:27.717739    3041 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:13:27.724971    3041 out.go:177] * Starting "functional-513000" primary control-plane node in "functional-513000" cluster
	I0927 10:13:27.728968    3041 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:13:27.728980    3041 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:13:27.728987    3041 cache.go:56] Caching tarball of preloaded images
	I0927 10:13:27.729049    3041 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:13:27.729053    3041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:13:27.729102    3041 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/config.json ...
	I0927 10:13:27.729568    3041 start.go:360] acquireMachinesLock for functional-513000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:13:27.729601    3041 start.go:364] duration metric: took 27.708µs to acquireMachinesLock for "functional-513000"
	I0927 10:13:27.729607    3041 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:13:27.729610    3041 fix.go:54] fixHost starting: 
	I0927 10:13:27.730186    3041 fix.go:112] recreateIfNeeded on functional-513000: state=Running err=<nil>
	W0927 10:13:27.730192    3041 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:13:27.734033    3041 out.go:177] * Updating the running qemu2 "functional-513000" VM ...
	I0927 10:13:27.741982    3041 machine.go:93] provisionDockerMachine start ...
	I0927 10:13:27.742018    3041 main.go:141] libmachine: Using SSH client type: native
	I0927 10:13:27.742126    3041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b4dc00] 0x104b50440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0927 10:13:27.742128    3041 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 10:13:27.784466    3041 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-513000
	
	I0927 10:13:27.784476    3041 buildroot.go:166] provisioning hostname "functional-513000"
	I0927 10:13:27.784518    3041 main.go:141] libmachine: Using SSH client type: native
	I0927 10:13:27.784637    3041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b4dc00] 0x104b50440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0927 10:13:27.784641    3041 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-513000 && echo "functional-513000" | sudo tee /etc/hostname
	I0927 10:13:27.830722    3041 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-513000
	
	I0927 10:13:27.830781    3041 main.go:141] libmachine: Using SSH client type: native
	I0927 10:13:27.830889    3041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b4dc00] 0x104b50440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0927 10:13:27.830895    3041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-513000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-513000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-513000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 10:13:27.872223    3041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 10:13:27.872231    3041 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19712-1508/.minikube CaCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19712-1508/.minikube}
	I0927 10:13:27.872238    3041 buildroot.go:174] setting up certificates
	I0927 10:13:27.872245    3041 provision.go:84] configureAuth start
	I0927 10:13:27.872249    3041 provision.go:143] copyHostCerts
	I0927 10:13:27.872311    3041 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem, removing ...
	I0927 10:13:27.872315    3041 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem
	I0927 10:13:27.872447    3041 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem (1078 bytes)
	I0927 10:13:27.872636    3041 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem, removing ...
	I0927 10:13:27.872637    3041 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem
	I0927 10:13:27.872836    3041 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem (1123 bytes)
	I0927 10:13:27.872999    3041 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem, removing ...
	I0927 10:13:27.873001    3041 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem
	I0927 10:13:27.873057    3041 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem (1679 bytes)
	I0927 10:13:27.873157    3041 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem org=jenkins.functional-513000 san=[127.0.0.1 192.168.105.4 functional-513000 localhost minikube]
	I0927 10:13:27.932199    3041 provision.go:177] copyRemoteCerts
	I0927 10:13:27.932235    3041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 10:13:27.932240    3041 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
	I0927 10:13:27.954570    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 10:13:27.962954    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 10:13:27.971050    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 10:13:27.979089    3041 provision.go:87] duration metric: took 106.837833ms to configureAuth
	I0927 10:13:27.979094    3041 buildroot.go:189] setting minikube options for container-runtime
	I0927 10:13:27.979183    3041 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:13:27.979219    3041 main.go:141] libmachine: Using SSH client type: native
	I0927 10:13:27.979301    3041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b4dc00] 0x104b50440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0927 10:13:27.979304    3041 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0927 10:13:28.020925    3041 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0927 10:13:28.020930    3041 buildroot.go:70] root file system type: tmpfs
	I0927 10:13:28.020976    3041 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0927 10:13:28.021040    3041 main.go:141] libmachine: Using SSH client type: native
	I0927 10:13:28.021138    3041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b4dc00] 0x104b50440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0927 10:13:28.021169    3041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0927 10:13:28.068168    3041 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0927 10:13:28.068214    3041 main.go:141] libmachine: Using SSH client type: native
	I0927 10:13:28.068320    3041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b4dc00] 0x104b50440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0927 10:13:28.068326    3041 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0927 10:13:28.110409    3041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 10:13:28.110416    3041 machine.go:96] duration metric: took 368.436333ms to provisionDockerMachine
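Note: the `sudo diff -u ... || { sudo mv ...; sudo systemctl -f daemon-reload && ... restart docker; }` command above is an idempotency guard: the freshly rendered docker.service.new only replaces the live unit, and Docker is only reloaded and restarted, when the two files actually differ. A minimal Go sketch of the same guard, operating on local files instead of the log's SSH transport (hypothetical helper names, not minikube's implementation):

    package provision

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit swaps in a new unit file and restarts the service only when
    // the rendered content differs from what is already on disk.
    func updateUnit(path string, rendered []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // unchanged: skip daemon-reload and restart entirely
        }
        if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v failed: %v: %s", args, err, out)
            }
        }
        return nil
    }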
	I0927 10:13:28.110425    3041 start.go:293] postStartSetup for "functional-513000" (driver="qemu2")
	I0927 10:13:28.110431    3041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 10:13:28.110488    3041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 10:13:28.110495    3041 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
	I0927 10:13:28.133232    3041 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 10:13:28.134828    3041 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 10:13:28.134832    3041 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19712-1508/.minikube/addons for local assets ...
	I0927 10:13:28.134919    3041 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19712-1508/.minikube/files for local assets ...
	I0927 10:13:28.135038    3041 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem -> 20392.pem in /etc/ssl/certs
	I0927 10:13:28.135155    3041 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/test/nested/copy/2039/hosts -> hosts in /etc/test/nested/copy/2039
	I0927 10:13:28.135189    3041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2039
	I0927 10:13:28.138754    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem --> /etc/ssl/certs/20392.pem (1708 bytes)
	I0927 10:13:28.147009    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/test/nested/copy/2039/hosts --> /etc/test/nested/copy/2039/hosts (40 bytes)
	I0927 10:13:28.155316    3041 start.go:296] duration metric: took 44.886709ms for postStartSetup
	I0927 10:13:28.155326    3041 fix.go:56] duration metric: took 425.723625ms for fixHost
	I0927 10:13:28.155372    3041 main.go:141] libmachine: Using SSH client type: native
	I0927 10:13:28.155473    3041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b4dc00] 0x104b50440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0927 10:13:28.155476    3041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 10:13:28.198840    3041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727457208.259236853
	
	I0927 10:13:28.198845    3041 fix.go:216] guest clock: 1727457208.259236853
	I0927 10:13:28.198849    3041 fix.go:229] Guest: 2024-09-27 10:13:28.259236853 -0700 PDT Remote: 2024-09-27 10:13:28.155327 -0700 PDT m=+0.530205043 (delta=103.909853ms)
	I0927 10:13:28.198859    3041 fix.go:200] guest clock delta is within tolerance: 103.909853ms
	I0927 10:13:28.198861    3041 start.go:83] releasing machines lock for "functional-513000", held for 469.265542ms
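Note: the fixHost step above ends with a clock-skew probe: `date +%s.%N` returns the guest's clock as fractional epoch seconds, and the delta against the host clock (103.9ms here) must fall within a tolerance before the machine lock is released. A sketch of that comparison; the 2s bound is an assumed example value, not necessarily minikube's:

    package clockcheck

    import (
        "strconv"
        "strings"
        "time"
    )

    // guestDelta parses the raw output of `date +%s.%N` and returns the
    // absolute difference between the guest clock and the given host time.
    func guestDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        d := hostNow.Sub(guest)
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    // withinTolerance mirrors the "guest clock delta is within tolerance"
    // check above; the 2s bound here is an assumed example value.
    func withinTolerance(d time.Duration) bool { return d < 2*time.Second }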
	I0927 10:13:28.199192    3041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 10:13:28.199193    3041 ssh_runner.go:195] Run: cat /version.json
	I0927 10:13:28.199200    3041 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
	I0927 10:13:28.199208    3041 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
	I0927 10:13:28.222491    3041 ssh_runner.go:195] Run: systemctl --version
	I0927 10:13:28.266328    3041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 10:13:28.268372    3041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 10:13:28.268405    3041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 10:13:28.271786    3041 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 10:13:28.271791    3041 start.go:495] detecting cgroup driver to use...
	I0927 10:13:28.271864    3041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 10:13:28.278264    3041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0927 10:13:28.282368    3041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 10:13:28.286310    3041 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 10:13:28.286338    3041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 10:13:28.290308    3041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 10:13:28.294349    3041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 10:13:28.298305    3041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 10:13:28.302210    3041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 10:13:28.306415    3041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 10:13:28.310488    3041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 10:13:28.314463    3041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0927 10:13:28.318452    3041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 10:13:28.322261    3041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 10:13:28.326229    3041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:13:28.448808    3041 ssh_runner.go:195] Run: sudo systemctl restart containerd
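Note: the run of sed commands above rewrites /etc/containerd/config.toml in place so containerd uses the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, and the expected CNI conf_dir; each edit is idempotent, so re-running the provisioner converges on the same file. A condensed sketch of that pattern, executing locally rather than through ssh_runner (hypothetical wrapper; the sed commands are taken from the log):

    package runtimecfg

    import (
        "fmt"
        "os/exec"
    )

    // Idempotent in-place edits, as issued by the provisioner above.
    var containerdEdits = []string{
        `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
        `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
        `sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
    }

    func applyContainerdConfig() error {
        for _, cmd := range containerdEdits {
            if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
                return fmt.Errorf("%q: %v: %s", cmd, err, out)
            }
        }
        // Restart so containerd picks up the rewritten config.
        return exec.Command("sudo", "systemctl", "restart", "containerd").Run()
    }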
	I0927 10:13:28.455909    3041 start.go:495] detecting cgroup driver to use...
	I0927 10:13:28.455964    3041 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0927 10:13:28.465317    3041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 10:13:28.470895    3041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 10:13:28.480288    3041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 10:13:28.485856    3041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 10:13:28.491189    3041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 10:13:28.497867    3041 ssh_runner.go:195] Run: which cri-dockerd
	I0927 10:13:28.499101    3041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0927 10:13:28.503694    3041 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0927 10:13:28.510061    3041 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0927 10:13:28.611964    3041 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0927 10:13:28.734358    3041 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0927 10:13:28.734413    3041 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0927 10:13:28.741382    3041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:13:28.856353    3041 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 10:13:41.177832    3041 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.321667208s)
	I0927 10:13:41.177906    3041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0927 10:13:41.184202    3041 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0927 10:13:41.194349    3041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 10:13:41.200552    3041 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0927 10:13:41.292028    3041 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0927 10:13:41.375914    3041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:13:41.464851    3041 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0927 10:13:41.472071    3041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 10:13:41.477608    3041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:13:41.570756    3041 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0927 10:13:41.606764    3041 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0927 10:13:41.606854    3041 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0927 10:13:41.609784    3041 start.go:563] Will wait 60s for crictl version
	I0927 10:13:41.609824    3041 ssh_runner.go:195] Run: which crictl
	I0927 10:13:41.611354    3041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 10:13:41.623244    3041 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0927 10:13:41.623320    3041 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 10:13:41.630894    3041 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 10:13:41.643768    3041 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0927 10:13:41.643914    3041 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0927 10:13:41.649794    3041 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0927 10:13:41.654759    3041 kubeadm.go:883] updating cluster {Name:functional-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 10:13:41.654826    3041 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:13:41.654887    3041 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 10:13:41.661219    3041 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-513000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0927 10:13:41.661223    3041 docker.go:615] Images already preloaded, skipping extraction
	I0927 10:13:41.661286    3041 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 10:13:41.667103    3041 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-513000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0927 10:13:41.667108    3041 cache_images.go:84] Images are preloaded, skipping loading
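Note: "Images are preloaded, skipping loading" above is a set-containment decision: every image:tag required for Kubernetes v1.31.1 must already appear in the `docker images --format {{.Repository}}:{{.Tag}}` listing. A sketch of that check, assuming the required list comes from the version's image catalogue:

    package preload

    // allPreloaded reports whether every required image:tag already shows up
    // in the `docker images` listing captured above.
    func allPreloaded(listed, required []string) bool {
        have := make(map[string]bool, len(listed))
        for _, img := range listed {
            have[img] = true
        }
        for _, img := range required {
            if !have[img] {
                return false
            }
        }
        return true
    }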
	I0927 10:13:41.667115    3041 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.1 docker true true} ...
	I0927 10:13:41.667165    3041 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-513000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 10:13:41.667221    3041 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0927 10:13:41.688341    3041 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0927 10:13:41.688360    3041 cni.go:84] Creating CNI manager for ""
	I0927 10:13:41.688367    3041 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:13:41.688386    3041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 10:13:41.688394    3041 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-513000 NodeName:functional-513000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 10:13:41.688448    3041 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-513000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 10:13:41.688510    3041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 10:13:41.692031    3041 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 10:13:41.692063    3041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 10:13:41.695255    3041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0927 10:13:41.701251    3041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 10:13:41.707127    3041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
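Note: the kubeadm.yaml written above (2012 bytes) is rendered from the kubeadm options struct logged earlier. A trimmed-down sketch of that templating step, covering only a few ClusterConfiguration fields (hypothetical template, not minikube's full one):

    package bootstrap

    import (
        "bytes"
        "text/template"
    )

    // A few ClusterConfiguration fields from the generated config above.
    var clusterCfg = template.Must(template.New("cc").Parse(
        "apiVersion: kubeadm.k8s.io/v1beta3\n" +
            "kind: ClusterConfiguration\n" +
            "controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}\n" +
            "kubernetesVersion: {{.Version}}\n" +
            "networking:\n" +
            "  podSubnet: \"{{.PodCIDR}}\"\n" +
            "  serviceSubnet: {{.ServiceCIDR}}\n"))

    type params struct {
        Endpoint, Version, PodCIDR, ServiceCIDR string
        Port                                    int
    }

    func render(p params) (string, error) {
        var b bytes.Buffer
        if err := clusterCfg.Execute(&b, p); err != nil {
            return "", err
        }
        return b.String(), nil
    }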
	I0927 10:13:41.713297    3041 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0927 10:13:41.714884    3041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:13:41.807068    3041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 10:13:41.813289    3041 certs.go:68] Setting up /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000 for IP: 192.168.105.4
	I0927 10:13:41.813297    3041 certs.go:194] generating shared ca certs ...
	I0927 10:13:41.813307    3041 certs.go:226] acquiring lock for ca certs: {Name:mk0418f7d8f4c252d010b1c431fe702739668245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:13:41.813462    3041 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key
	I0927 10:13:41.813508    3041 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key
	I0927 10:13:41.813513    3041 certs.go:256] generating profile certs ...
	I0927 10:13:41.813581    3041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.key
	I0927 10:13:41.813654    3041 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/apiserver.key.c9f629bd
	I0927 10:13:41.813702    3041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/proxy-client.key
	I0927 10:13:41.813853    3041 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039.pem (1338 bytes)
	W0927 10:13:41.813882    3041 certs.go:480] ignoring /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039_empty.pem, impossibly tiny 0 bytes
	I0927 10:13:41.813886    3041 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 10:13:41.813907    3041 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem (1078 bytes)
	I0927 10:13:41.813925    3041 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem (1123 bytes)
	I0927 10:13:41.813940    3041 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem (1679 bytes)
	I0927 10:13:41.813976    3041 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem (1708 bytes)
	I0927 10:13:41.814305    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 10:13:41.822791    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 10:13:41.831074    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 10:13:41.839284    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 10:13:41.847564    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 10:13:41.855794    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 10:13:41.864828    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 10:13:41.873733    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 10:13:41.882246    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039.pem --> /usr/share/ca-certificates/2039.pem (1338 bytes)
	I0927 10:13:41.890613    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem --> /usr/share/ca-certificates/20392.pem (1708 bytes)
	I0927 10:13:41.898938    3041 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 10:13:41.907188    3041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 10:13:41.913339    3041 ssh_runner.go:195] Run: openssl version
	I0927 10:13:41.915424    3041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20392.pem && ln -fs /usr/share/ca-certificates/20392.pem /etc/ssl/certs/20392.pem"
	I0927 10:13:41.919241    3041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20392.pem
	I0927 10:13:41.920827    3041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:11 /usr/share/ca-certificates/20392.pem
	I0927 10:13:41.920854    3041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20392.pem
	I0927 10:13:41.922996    3041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20392.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 10:13:41.926689    3041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 10:13:41.930732    3041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:13:41.932762    3041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:13:41.932789    3041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:13:41.934817    3041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 10:13:41.938475    3041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2039.pem && ln -fs /usr/share/ca-certificates/2039.pem /etc/ssl/certs/2039.pem"
	I0927 10:13:41.942535    3041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2039.pem
	I0927 10:13:41.944162    3041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:11 /usr/share/ca-certificates/2039.pem
	I0927 10:13:41.944186    3041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2039.pem
	I0927 10:13:41.946191    3041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2039.pem /etc/ssl/certs/51391683.0"
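Note: the `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above follow OpenSSL's CA lookup convention: clients locate a trusted certificate by hashing its subject name and probing `<hash>.N` symlinks in the certs directory. A sketch of installing one CA that way (hypothetical helper; the log does it via `test -L || ln -fs` over SSH):

    package catrust

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA links /etc/ssl/certs/<subject-hash>.0 at the given PEM so
    // OpenSSL-based clients can find it during chain verification.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked: same guard as `test -L || ln -fs` above
        }
        return os.Symlink(pemPath, link)
    }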
	I0927 10:13:41.949988    3041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 10:13:41.951806    3041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 10:13:41.953913    3041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 10:13:41.956279    3041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 10:13:41.958281    3041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 10:13:41.960368    3041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 10:13:41.962489    3041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
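Note: each `openssl x509 -checkend 86400` above asks one question: will this certificate still be valid in 24 hours (exit 0) or not (exit 1)? The equivalent predicate in pure Go stdlib, for comparison:

    package certcheck

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "os"
        "time"
    )

    // validFor reports whether the certificate at path is still valid d from
    // now, matching `openssl x509 -checkend <seconds>`.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block in " + path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }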
	I0927 10:13:41.964643    3041 kubeadm.go:392] StartCluster: {Name:functional-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:13:41.964716    3041 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 10:13:41.970068    3041 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 10:13:41.974174    3041 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 10:13:41.974181    3041 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 10:13:41.974205    3041 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 10:13:41.977803    3041 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 10:13:41.978082    3041 kubeconfig.go:125] found "functional-513000" server: "https://192.168.105.4:8441"
	I0927 10:13:41.978732    3041 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 10:13:41.982209    3041 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
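Note: drift detection above reduces to `diff -u` between the kubeadm.yaml last applied and the newly rendered kubeadm.yaml.new: exit status 0 means nothing to do, non-zero means reconfigure (here the admission-plugins change forces the control plane to be rebuilt). A sketch of that decision:

    package drift

    import "os/exec"

    // detect returns the unified diff and true when the rendered config
    // differs from the live one. diff exits 0 when the files are identical
    // and 1 when they differ; other failures are folded into "drifted" here
    // for brevity.
    func detect(current, next string) (string, bool) {
        out, err := exec.Command("sudo", "diff", "-u", current, next).CombinedOutput()
        if err == nil {
            return "", false
        }
        return string(out), true
    }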
	I0927 10:13:41.982211    3041 kubeadm.go:1160] stopping kube-system containers ...
	I0927 10:13:41.982262    3041 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 10:13:41.989197    3041 docker.go:483] Stopping containers: [3142db68a67f 1b3227e6b1e6 6ba5a90de040 07d7153ebac7 f56e018f045b 8feb7f6b70d0 0c872f1ed380 a1f50ff565ca 7210194424bf 258a301cea7c 2db659b9c24a 4fcd27cd8d19 acc9fe1f63b0 cb0148a6b45b 39ec402cb8e3 1fe2c3e8c641 97dd2448441a 39b2f08131c8 657f99a350d4 0a87dc993009 691f389e7005 ed19c314e66e 7d3a5638ce3f ce06522f85ed 9f40d711352d f94f8d1bb6d3 97c7f38ff43b]
	I0927 10:13:41.989262    3041 ssh_runner.go:195] Run: docker stop 3142db68a67f 1b3227e6b1e6 6ba5a90de040 07d7153ebac7 f56e018f045b 8feb7f6b70d0 0c872f1ed380 a1f50ff565ca 7210194424bf 258a301cea7c 2db659b9c24a 4fcd27cd8d19 acc9fe1f63b0 cb0148a6b45b 39ec402cb8e3 1fe2c3e8c641 97dd2448441a 39b2f08131c8 657f99a350d4 0a87dc993009 691f389e7005 ed19c314e66e 7d3a5638ce3f ce06522f85ed 9f40d711352d f94f8d1bb6d3 97c7f38ff43b
	I0927 10:13:42.004124    3041 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 10:13:42.117245    3041 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 10:13:42.123514    3041 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Sep 27 17:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 27 17:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 27 17:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 27 17:12 /etc/kubernetes/scheduler.conf
	
	I0927 10:13:42.123552    3041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0927 10:13:42.128265    3041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0927 10:13:42.132616    3041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0927 10:13:42.136936    3041 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0927 10:13:42.136970    3041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 10:13:42.141038    3041 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0927 10:13:42.144847    3041 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0927 10:13:42.144876    3041 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 10:13:42.148382    3041 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 10:13:42.152020    3041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:13:42.169776    3041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:13:42.778505    3041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:13:42.897578    3041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:13:42.935888    3041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:13:42.967794    3041 api_server.go:52] waiting for apiserver process to appear ...
	I0927 10:13:42.967875    3041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:13:43.469898    3041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:13:43.969015    3041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:13:43.974483    3041 api_server.go:72] duration metric: took 1.006706208s to wait for apiserver process to appear ...
	I0927 10:13:43.974489    3041 api_server.go:88] waiting for apiserver healthz status ...
	I0927 10:13:43.974510    3041 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0927 10:13:45.610554    3041 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 10:13:45.610563    3041 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 10:13:45.610568    3041 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0927 10:13:45.618126    3041 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 10:13:45.618130    3041 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 10:13:45.976553    3041 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0927 10:13:45.982806    3041 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 10:13:45.982816    3041 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 10:13:46.476550    3041 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0927 10:13:46.479686    3041 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 10:13:46.479692    3041 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 10:13:46.976632    3041 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0927 10:13:46.995976    3041 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0927 10:13:47.008878    3041 api_server.go:141] control plane version: v1.31.1
	I0927 10:13:47.008896    3041 api_server.go:131] duration metric: took 3.034450041s to wait for apiserver health ...
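Note: the healthz wait above is a plain HTTP poll: 403 (anonymous probe, RBAC bootstrap roles not yet created) and 500 (post-start hooks still failing) both count as "not ready", and the loop only exits on 200. A sketch of that poll with an assumed 500ms interval; the real wait also checks the apiserver process and version, as the surrounding lines show:

    package apiwait

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls https://<ip>:<port>/healthz until it returns 200 or
    // the timeout elapses. 403/500 responses mean "not ready yet".
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{Transport: &http.Transport{
            // The probe is anonymous, as the 403s above show, so the
            // apiserver's self-signed serving cert is not verified here.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                io.Copy(io.Discard, resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }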
	I0927 10:13:47.008907    3041 cni.go:84] Creating CNI manager for ""
	I0927 10:13:47.008920    3041 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:13:47.014251    3041 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 10:13:47.017389    3041 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 10:13:47.027502    3041 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 10:13:47.041361    3041 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 10:13:47.050614    3041 system_pods.go:59] 7 kube-system pods found
	I0927 10:13:47.050634    3041 system_pods.go:61] "coredns-7c65d6cfc9-5xdkc" [ba429f7c-aa11-41fa-ae50-07ad940469f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 10:13:47.050639    3041 system_pods.go:61] "etcd-functional-513000" [1140869e-d417-4fdc-b145-81358a6fe26b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 10:13:47.050642    3041 system_pods.go:61] "kube-apiserver-functional-513000" [03d0d1a7-6ef0-4a29-b4fb-24418eede1e3] Pending
	I0927 10:13:47.050645    3041 system_pods.go:61] "kube-controller-manager-functional-513000" [c692a96a-a20c-441f-b88f-69bf70c99ec5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 10:13:47.050648    3041 system_pods.go:61] "kube-proxy-kbs6z" [6e35213f-77e7-4f66-8732-e45e063b3661] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0927 10:13:47.050652    3041 system_pods.go:61] "kube-scheduler-functional-513000" [071019b4-a615-4758-8d4e-471238c2cc67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 10:13:47.050654    3041 system_pods.go:61] "storage-provisioner" [4d7d0e23-a79b-48c4-bed9-2905f3ef1bbe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0927 10:13:47.050658    3041 system_pods.go:74] duration metric: took 9.292541ms to wait for pod list to return data ...
	I0927 10:13:47.050664    3041 node_conditions.go:102] verifying NodePressure condition ...
	I0927 10:13:47.053349    3041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 10:13:47.053358    3041 node_conditions.go:123] node cpu capacity is 2
	I0927 10:13:47.053369    3041 node_conditions.go:105] duration metric: took 2.702ms to run NodePressure ...
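Note: the NodePressure verification above reads the node's capacity (ephemeral storage, CPU) and its pressure conditions before the kubeadm addon phase runs. A sketch of the condition side using standard client-go types (hypothetical helper, not minikube's node_conditions code):

    package nodecheck

    import corev1 "k8s.io/api/core/v1"

    // underPressure reports whether any of the kubelet's pressure conditions
    // (memory, disk, PID) is currently true on the node.
    func underPressure(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            switch c.Type {
            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                if c.Status == corev1.ConditionTrue {
                    return true
                }
            }
        }
        return false
    }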
	I0927 10:13:47.053379    3041 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:13:47.300604    3041 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 10:13:47.303714    3041 kubeadm.go:739] kubelet initialised
	I0927 10:13:47.303720    3041 kubeadm.go:740] duration metric: took 3.106166ms waiting for restarted kubelet to initialise ...
	I0927 10:13:47.303724    3041 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 10:13:47.307195    3041 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5xdkc" in "kube-system" namespace to be "Ready" ...
	I0927 10:13:49.313680    3041 pod_ready.go:103] pod "coredns-7c65d6cfc9-5xdkc" in "kube-system" namespace has status "Ready":"False"
	I0927 10:13:51.322567    3041 pod_ready.go:103] pod "coredns-7c65d6cfc9-5xdkc" in "kube-system" namespace has status "Ready":"False"
	I0927 10:13:51.822461    3041 pod_ready.go:93] pod "coredns-7c65d6cfc9-5xdkc" in "kube-system" namespace has status "Ready":"True"
	I0927 10:13:51.822491    3041 pod_ready.go:82] duration metric: took 4.515355125s for pod "coredns-7c65d6cfc9-5xdkc" in "kube-system" namespace to be "Ready" ...
	I0927 10:13:51.822509    3041 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:13:53.837754    3041 pod_ready.go:103] pod "etcd-functional-513000" in "kube-system" namespace has status "Ready":"False"
	I0927 10:13:56.337118    3041 pod_ready.go:103] pod "etcd-functional-513000" in "kube-system" namespace has status "Ready":"False"
	I0927 10:13:58.335271    3041 pod_ready.go:93] pod "etcd-functional-513000" in "kube-system" namespace has status "Ready":"True"
	I0927 10:13:58.335300    3041 pod_ready.go:82] duration metric: took 6.512882125s for pod "etcd-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:13:58.335317    3041 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:13:58.848627    3041 pod_ready.go:93] pod "kube-apiserver-functional-513000" in "kube-system" namespace has status "Ready":"True"
	I0927 10:13:58.848654    3041 pod_ready.go:82] duration metric: took 513.33375ms for pod "kube-apiserver-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:13:58.848669    3041 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:00.862785    3041 pod_ready.go:103] pod "kube-controller-manager-functional-513000" in "kube-system" namespace has status "Ready":"False"
	I0927 10:14:03.363869    3041 pod_ready.go:93] pod "kube-controller-manager-functional-513000" in "kube-system" namespace has status "Ready":"True"
	I0927 10:14:03.363901    3041 pod_ready.go:82] duration metric: took 4.515290417s for pod "kube-controller-manager-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:03.363922    3041 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kbs6z" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:03.372009    3041 pod_ready.go:93] pod "kube-proxy-kbs6z" in "kube-system" namespace has status "Ready":"True"
	I0927 10:14:03.372020    3041 pod_ready.go:82] duration metric: took 8.09ms for pod "kube-proxy-kbs6z" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:03.372031    3041 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:03.377912    3041 pod_ready.go:93] pod "kube-scheduler-functional-513000" in "kube-system" namespace has status "Ready":"True"
	I0927 10:14:03.377923    3041 pod_ready.go:82] duration metric: took 5.88625ms for pod "kube-scheduler-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:03.377934    3041 pod_ready.go:39] duration metric: took 16.0744635s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 10:14:03.377960    3041 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 10:14:03.389738    3041 ops.go:34] apiserver oom_adj: -16
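
Note: -16 is the expected value here. The kubelet assigns guaranteed/critical static pods an oom_score_adj of -997, and the kernel's legacy oom_adj view scales that by 17/1000: -997 × 17 / 1000 ≈ -16.9, truncated to -16. The check therefore confirms the apiserver process is strongly shielded from the OOM killer. (The -997 constant and the 17/1000 scaling are background assumptions, not something this log states.)
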
	I0927 10:14:03.389750    3041 kubeadm.go:597] duration metric: took 21.415910042s to restartPrimaryControlPlane
	I0927 10:14:03.389758    3041 kubeadm.go:394] duration metric: took 21.425463042s to StartCluster
	I0927 10:14:03.389776    3041 settings.go:142] acquiring lock: {Name:mk58fc55a93399a03fb1c9ac710554db41068524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:14:03.389991    3041 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:14:03.390702    3041 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/kubeconfig: {Name:mk8300c379932403020f33b54e2599e68fb2c757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:14:03.391156    3041 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:14:03.391178    3041 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 10:14:03.391284    3041 addons.go:69] Setting storage-provisioner=true in profile "functional-513000"
	I0927 10:14:03.391298    3041 addons.go:234] Setting addon storage-provisioner=true in "functional-513000"
	W0927 10:14:03.391303    3041 addons.go:243] addon storage-provisioner should already be in state true
	I0927 10:14:03.391314    3041 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:14:03.391321    3041 host.go:66] Checking if "functional-513000" exists ...
	I0927 10:14:03.391341    3041 addons.go:69] Setting default-storageclass=true in profile "functional-513000"
	I0927 10:14:03.391392    3041 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-513000"
	I0927 10:14:03.393292    3041 addons.go:234] Setting addon default-storageclass=true in "functional-513000"
	W0927 10:14:03.393298    3041 addons.go:243] addon default-storageclass should already be in state true
	I0927 10:14:03.393309    3041 host.go:66] Checking if "functional-513000" exists ...
	I0927 10:14:03.395208    3041 out.go:177] * Verifying Kubernetes components...
	I0927 10:14:03.395871    3041 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 10:14:03.399481    3041 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 10:14:03.399495    3041 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
	I0927 10:14:03.402174    3041 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:14:03.406224    3041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:14:03.410234    3041 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 10:14:03.410240    3041 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 10:14:03.410249    3041 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
	I0927 10:14:03.549289    3041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 10:14:03.556197    3041 node_ready.go:35] waiting up to 6m0s for node "functional-513000" to be "Ready" ...
	I0927 10:14:03.557784    3041 node_ready.go:49] node "functional-513000" has status "Ready":"True"
	I0927 10:14:03.557800    3041 node_ready.go:38] duration metric: took 1.582041ms for node "functional-513000" to be "Ready" ...
	I0927 10:14:03.557803    3041 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 10:14:03.558442    3041 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 10:14:03.561078    3041 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5xdkc" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:03.564002    3041 pod_ready.go:93] pod "coredns-7c65d6cfc9-5xdkc" in "kube-system" namespace has status "Ready":"True"
	I0927 10:14:03.564006    3041 pod_ready.go:82] duration metric: took 2.921667ms for pod "coredns-7c65d6cfc9-5xdkc" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:03.564009    3041 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:03.628100    3041 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 10:14:03.754458    3041 pod_ready.go:93] pod "etcd-functional-513000" in "kube-system" namespace has status "Ready":"True"
	I0927 10:14:03.754464    3041 pod_ready.go:82] duration metric: took 190.45575ms for pod "etcd-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:03.754468    3041 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:03.903994    3041 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0927 10:14:03.911708    3041 addons.go:510] duration metric: took 520.549709ms for enable addons: enabled=[default-storageclass storage-provisioner]
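
Note: both addons were installed by copying manifests into /etc/kubernetes/addons/ and running the in-VM kubectl apply commands logged at 10:14:03.558 and 10:14:03.628 above. A quick manual verification sketch (minikube's default StorageClass is typically named "standard"; the storage-provisioner pod name appears later in this report):

    kubectl --context functional-513000 get storageclass
    kubectl --context functional-513000 -n kube-system get pod storage-provisioner
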
	I0927 10:14:04.155739    3041 pod_ready.go:93] pod "kube-apiserver-functional-513000" in "kube-system" namespace has status "Ready":"True"
	I0927 10:14:04.155754    3041 pod_ready.go:82] duration metric: took 401.286542ms for pod "kube-apiserver-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:04.155766    3041 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:04.560810    3041 pod_ready.go:93] pod "kube-controller-manager-functional-513000" in "kube-system" namespace has status "Ready":"True"
	I0927 10:14:04.560849    3041 pod_ready.go:82] duration metric: took 405.072542ms for pod "kube-controller-manager-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:04.560877    3041 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kbs6z" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:04.959359    3041 pod_ready.go:93] pod "kube-proxy-kbs6z" in "kube-system" namespace has status "Ready":"True"
	I0927 10:14:04.959395    3041 pod_ready.go:82] duration metric: took 398.5075ms for pod "kube-proxy-kbs6z" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:04.959423    3041 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:05.359150    3041 pod_ready.go:93] pod "kube-scheduler-functional-513000" in "kube-system" namespace has status "Ready":"True"
	I0927 10:14:05.359180    3041 pod_ready.go:82] duration metric: took 399.747083ms for pod "kube-scheduler-functional-513000" in "kube-system" namespace to be "Ready" ...
	I0927 10:14:05.359210    3041 pod_ready.go:39] duration metric: took 1.801422417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 10:14:05.359254    3041 api_server.go:52] waiting for apiserver process to appear ...
	I0927 10:14:05.359576    3041 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:14:05.379028    3041 api_server.go:72] duration metric: took 1.98787875s to wait for apiserver process to appear ...
	I0927 10:14:05.379045    3041 api_server.go:88] waiting for apiserver healthz status ...
	I0927 10:14:05.379068    3041 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0927 10:14:05.385943    3041 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0927 10:14:05.387268    3041 api_server.go:141] control plane version: v1.31.1
	I0927 10:14:05.387277    3041 api_server.go:131] duration metric: took 8.227584ms to wait for apiserver health ...
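
Note: the healthz probe above is a plain HTTPS GET against the apiserver. It can be reproduced from the host with, for example (-k skips verification of the cluster's self-signed CA; alternatively point curl --cacert at the profile's ca.crt):

    curl -k https://192.168.105.4:8441/healthz
    # ok
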
	I0927 10:14:05.387284    3041 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 10:14:05.559876    3041 system_pods.go:59] 7 kube-system pods found
	I0927 10:14:05.559892    3041 system_pods.go:61] "coredns-7c65d6cfc9-5xdkc" [ba429f7c-aa11-41fa-ae50-07ad940469f0] Running
	I0927 10:14:05.559896    3041 system_pods.go:61] "etcd-functional-513000" [1140869e-d417-4fdc-b145-81358a6fe26b] Running
	I0927 10:14:05.559901    3041 system_pods.go:61] "kube-apiserver-functional-513000" [03d0d1a7-6ef0-4a29-b4fb-24418eede1e3] Running
	I0927 10:14:05.559905    3041 system_pods.go:61] "kube-controller-manager-functional-513000" [c692a96a-a20c-441f-b88f-69bf70c99ec5] Running
	I0927 10:14:05.559917    3041 system_pods.go:61] "kube-proxy-kbs6z" [6e35213f-77e7-4f66-8732-e45e063b3661] Running
	I0927 10:14:05.559921    3041 system_pods.go:61] "kube-scheduler-functional-513000" [071019b4-a615-4758-8d4e-471238c2cc67] Running
	I0927 10:14:05.559927    3041 system_pods.go:61] "storage-provisioner" [4d7d0e23-a79b-48c4-bed9-2905f3ef1bbe] Running
	I0927 10:14:05.559932    3041 system_pods.go:74] duration metric: took 172.645417ms to wait for pod list to return data ...
	I0927 10:14:05.559940    3041 default_sa.go:34] waiting for default service account to be created ...
	I0927 10:14:05.757282    3041 default_sa.go:45] found service account: "default"
	I0927 10:14:05.757303    3041 default_sa.go:55] duration metric: took 197.360375ms for default service account to be created ...
	I0927 10:14:05.757318    3041 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 10:14:05.972155    3041 system_pods.go:86] 7 kube-system pods found
	I0927 10:14:05.972178    3041 system_pods.go:89] "coredns-7c65d6cfc9-5xdkc" [ba429f7c-aa11-41fa-ae50-07ad940469f0] Running
	I0927 10:14:05.972195    3041 system_pods.go:89] "etcd-functional-513000" [1140869e-d417-4fdc-b145-81358a6fe26b] Running
	I0927 10:14:05.972206    3041 system_pods.go:89] "kube-apiserver-functional-513000" [03d0d1a7-6ef0-4a29-b4fb-24418eede1e3] Running
	I0927 10:14:05.972213    3041 system_pods.go:89] "kube-controller-manager-functional-513000" [c692a96a-a20c-441f-b88f-69bf70c99ec5] Running
	I0927 10:14:05.972219    3041 system_pods.go:89] "kube-proxy-kbs6z" [6e35213f-77e7-4f66-8732-e45e063b3661] Running
	I0927 10:14:05.972224    3041 system_pods.go:89] "kube-scheduler-functional-513000" [071019b4-a615-4758-8d4e-471238c2cc67] Running
	I0927 10:14:05.972229    3041 system_pods.go:89] "storage-provisioner" [4d7d0e23-a79b-48c4-bed9-2905f3ef1bbe] Running
	I0927 10:14:05.972240    3041 system_pods.go:126] duration metric: took 214.917459ms to wait for k8s-apps to be running ...
	I0927 10:14:05.972252    3041 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 10:14:05.972486    3041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 10:14:05.990530    3041 system_svc.go:56] duration metric: took 18.276958ms WaitForService to wait for kubelet
	I0927 10:14:05.990544    3041 kubeadm.go:582] duration metric: took 2.599412333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:14:05.990563    3041 node_conditions.go:102] verifying NodePressure condition ...
	I0927 10:14:06.161292    3041 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 10:14:06.161315    3041 node_conditions.go:123] node cpu capacity is 2
	I0927 10:14:06.161343    3041 node_conditions.go:105] duration metric: took 170.774375ms to run NodePressure ...
	I0927 10:14:06.161371    3041 start.go:241] waiting for startup goroutines ...
	I0927 10:14:06.161387    3041 start.go:246] waiting for cluster config update ...
	I0927 10:14:06.161410    3041 start.go:255] writing updated cluster config ...
	I0927 10:14:06.162668    3041 ssh_runner.go:195] Run: rm -f paused
	I0927 10:14:06.224942    3041 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0927 10:14:06.228164    3041 out.go:201] 
	W0927 10:14:06.232117    3041 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0927 10:14:06.235961    3041 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0927 10:14:06.244123    3041 out.go:177] * Done! kubectl is now configured to use "functional-513000" cluster and "default" namespace by default
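
Note: the warning above flags a client/server minor-version skew of 2 (kubectl 1.29.2 against Kubernetes 1.31.1); kubectl only supports a skew of +/-1 relative to the apiserver. The log's own suggestion sidesteps this by invoking the bundled, version-matched kubectl:

    minikube -p functional-513000 kubectl -- get pods -A
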
	
	
	==> Docker <==
	Sep 27 17:14:33 functional-513000 cri-dockerd[6057]: time="2024-09-27T17:14:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fa6f67b27098e386fd5a326b462f62af52dc4b876559af4c0d87ab33557a5b00/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 27 17:14:38 functional-513000 cri-dockerd[6057]: time="2024-09-27T17:14:38Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Downloaded newer image for nginx:latest"
	Sep 27 17:14:38 functional-513000 dockerd[5807]: time="2024-09-27T17:14:38.820283249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 17:14:38 functional-513000 dockerd[5807]: time="2024-09-27T17:14:38.820329123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 17:14:38 functional-513000 dockerd[5807]: time="2024-09-27T17:14:38.820348581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 17:14:38 functional-513000 dockerd[5807]: time="2024-09-27T17:14:38.820382331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 17:14:45 functional-513000 dockerd[5807]: time="2024-09-27T17:14:45.773426087Z" level=info msg="shim disconnected" id=a816efee837bd984ca20c1cd6013413fcd8c786f83f358c928ddbd1396c40d9a namespace=moby
	Sep 27 17:14:45 functional-513000 dockerd[5807]: time="2024-09-27T17:14:45.773457170Z" level=warning msg="cleaning up after shim disconnected" id=a816efee837bd984ca20c1cd6013413fcd8c786f83f358c928ddbd1396c40d9a namespace=moby
	Sep 27 17:14:45 functional-513000 dockerd[5807]: time="2024-09-27T17:14:45.773461628Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 17:14:45 functional-513000 dockerd[5801]: time="2024-09-27T17:14:45.773649917Z" level=info msg="ignoring event" container=a816efee837bd984ca20c1cd6013413fcd8c786f83f358c928ddbd1396c40d9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:14:45 functional-513000 dockerd[5807]: time="2024-09-27T17:14:45.777702436Z" level=warning msg="cleanup warnings time=\"2024-09-27T17:14:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 27 17:14:45 functional-513000 dockerd[5801]: time="2024-09-27T17:14:45.860776875Z" level=info msg="ignoring event" container=fa6f67b27098e386fd5a326b462f62af52dc4b876559af4c0d87ab33557a5b00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 17:14:45 functional-513000 dockerd[5807]: time="2024-09-27T17:14:45.861528447Z" level=info msg="shim disconnected" id=fa6f67b27098e386fd5a326b462f62af52dc4b876559af4c0d87ab33557a5b00 namespace=moby
	Sep 27 17:14:45 functional-513000 dockerd[5807]: time="2024-09-27T17:14:45.861559196Z" level=warning msg="cleaning up after shim disconnected" id=fa6f67b27098e386fd5a326b462f62af52dc4b876559af4c0d87ab33557a5b00 namespace=moby
	Sep 27 17:14:45 functional-513000 dockerd[5807]: time="2024-09-27T17:14:45.861564530Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 17:14:46 functional-513000 dockerd[5807]: time="2024-09-27T17:14:46.706784573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 17:14:46 functional-513000 dockerd[5807]: time="2024-09-27T17:14:46.706824156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 17:14:46 functional-513000 dockerd[5807]: time="2024-09-27T17:14:46.706835822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 17:14:46 functional-513000 dockerd[5807]: time="2024-09-27T17:14:46.706878655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 17:14:46 functional-513000 cri-dockerd[6057]: time="2024-09-27T17:14:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dfe155ac9fd6308b2d41a1c4d360ae26ba5b6e6750b48430bcecd7fca1ee7d32/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 27 17:14:47 functional-513000 cri-dockerd[6057]: time="2024-09-27T17:14:47Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Sep 27 17:14:47 functional-513000 dockerd[5807]: time="2024-09-27T17:14:47.552953235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 17:14:47 functional-513000 dockerd[5807]: time="2024-09-27T17:14:47.553003151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 17:14:47 functional-513000 dockerd[5807]: time="2024-09-27T17:14:47.553020775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 17:14:47 functional-513000 dockerd[5807]: time="2024-09-27T17:14:47.553091149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                           CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	dc2f18435bbf5       nginx@sha256:640ac1e1ca185051544c12ed0c32c3f0be5d35737482a323af1d3fa5f12574d6   6 seconds ago        Running             myfrontend                0                   dfe155ac9fd63       sp-pod
	7ebf1abd8a20f       72565bf5bbedf                                                                   20 seconds ago       Exited              echoserver-arm            2                   b3f6a026290fb       hello-node-connect-65d86f57f4-npgjt
	cbd76bc730241       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf   32 seconds ago       Running             nginx                     0                   51624cb195299       nginx-svc
	7b89e8836471d       ba04bb24b9575                                                                   51 seconds ago       Running             storage-provisioner       4                   d3e8c2cec228b       storage-provisioner
	d370836cb600f       2f6c962e7b831                                                                   About a minute ago   Running             coredns                   2                   e1cd6c375f987       coredns-7c65d6cfc9-5xdkc
	a6d95094fed5a       ba04bb24b9575                                                                   About a minute ago   Exited              storage-provisioner       3                   d3e8c2cec228b       storage-provisioner
	abd7153d2d929       24a140c548c07                                                                   About a minute ago   Running             kube-proxy                2                   9e48d06651440       kube-proxy-kbs6z
	323570af09957       7f8aa378bb47d                                                                   About a minute ago   Running             kube-scheduler            2                   46fe12e341616       kube-scheduler-functional-513000
	9ea98b848020d       27e3830e14027                                                                   About a minute ago   Running             etcd                      2                   11e0a129b63de       etcd-functional-513000
	ba1d748b4cabe       279f381cb3736                                                                   About a minute ago   Running             kube-controller-manager   2                   03953048ec08c       kube-controller-manager-functional-513000
	f0d460e1ef880       d3f53a98c0a9d                                                                   About a minute ago   Running             kube-apiserver            0                   c3bd6878d0eb5       kube-apiserver-functional-513000
	3142db68a67f4       2f6c962e7b831                                                                   About a minute ago   Exited              coredns                   1                   07d7153ebac7f       coredns-7c65d6cfc9-5xdkc
	6ba5a90de0408       24a140c548c07                                                                   About a minute ago   Exited              kube-proxy                1                   f56e018f045b0       kube-proxy-kbs6z
	0c872f1ed3808       279f381cb3736                                                                   About a minute ago   Exited              kube-controller-manager   1                   2db659b9c24a4       kube-controller-manager-functional-513000
	7210194424bf1       27e3830e14027                                                                   About a minute ago   Exited              etcd                      1                   4fcd27cd8d198       etcd-functional-513000
	258a301cea7cb       7f8aa378bb47d                                                                   About a minute ago   Exited              kube-scheduler            1                   cb0148a6b45b0       kube-scheduler-functional-513000
	
	
	==> coredns [3142db68a67f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37750 - 51745 "HINFO IN 4514773066666574265.8691031866243594909. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010135855s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d370836cb600] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42282 - 36835 "HINFO IN 7611999908053166039.322470128243938969. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.011100687s
	[INFO] 10.244.0.1:43800 - 42195 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00011929s
	[INFO] 10.244.0.1:12227 - 26444 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000092082s
	[INFO] 10.244.0.1:50595 - 9094 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000906861s
	[INFO] 10.244.0.1:56123 - 18563 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000030958s
	[INFO] 10.244.0.1:34877 - 26631 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.00007579s
	[INFO] 10.244.0.1:45081 - 46328 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000090707s
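
Note: the A/AAAA lookups for nginx-svc.default.svc.cluster.local above come from in-cluster clients resolving the test Service. They can be reproduced with a throwaway pod, e.g. (image choice is illustrative):

    kubectl --context functional-513000 run dns-check --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup nginx-svc.default.svc.cluster.local
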
	
	
	==> describe nodes <==
	Name:               functional-513000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-513000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=functional-513000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T10_12_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:11:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-513000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:14:46 +0000   Fri, 27 Sep 2024 17:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:14:46 +0000   Fri, 27 Sep 2024 17:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:14:46 +0000   Fri, 27 Sep 2024 17:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:14:46 +0000   Fri, 27 Sep 2024 17:12:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-513000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 017b001be19e44fabbea6dfba8bc2480
	  System UUID:                017b001be19e44fabbea6dfba8bc2480
	  Boot ID:                    2f1a4dee-6ef7-4812-a0ad-cb1723d6059b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-65d86f57f4-npgjt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-7c65d6cfc9-5xdkc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m47s
	  kube-system                 etcd-functional-513000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m53s
	  kube-system                 kube-apiserver-functional-513000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-functional-513000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-proxy-kbs6z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-scheduler-functional-513000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 67s                    kube-proxy       
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m53s)  kubelet          Node functional-513000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m53s)  kubelet          Node functional-513000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m53s)  kubelet          Node functional-513000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m48s                  node-controller  Node functional-513000 event: Registered Node functional-513000 in Controller
	  Normal  NodeReady                2m48s                  kubelet          Node functional-513000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  115s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)    kubelet          Node functional-513000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)    kubelet          Node functional-513000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x7 over 115s)    kubelet          Node functional-513000 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                   node-controller  Node functional-513000 event: Registered Node functional-513000 in Controller
	  Normal  Starting                 71s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node functional-513000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node functional-513000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x7 over 70s)      kubelet          Node functional-513000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           64s                    node-controller  Node functional-513000 event: Registered Node functional-513000 in Controller
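
Note on the Allocated resources table above: the 750m CPU request is the sum of the listed per-pod requests (100m coredns + 100m etcd + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), and 750m of the node's 2-CPU (2000m) capacity is 37.5%, shown as 37%. The 170Mi memory request is likewise 70Mi (coredns) + 100Mi (etcd).
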
	
	
	==> dmesg <==
	[  +0.220263] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[  +1.080755] systemd-fstab-generator[3979]: Ignoring "noauto" option for root device
	[Sep27 17:13] kauditd_printk_skb: 199 callbacks suppressed
	[ +14.139746] systemd-fstab-generator[4889]: Ignoring "noauto" option for root device
	[  +0.057081] kauditd_printk_skb: 33 callbacks suppressed
	[ +12.723987] systemd-fstab-generator[5330]: Ignoring "noauto" option for root device
	[  +0.051863] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.115120] systemd-fstab-generator[5364]: Ignoring "noauto" option for root device
	[  +0.120534] systemd-fstab-generator[5376]: Ignoring "noauto" option for root device
	[  +0.119805] systemd-fstab-generator[5394]: Ignoring "noauto" option for root device
	[  +5.131598] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.319367] systemd-fstab-generator[6010]: Ignoring "noauto" option for root device
	[  +0.085894] systemd-fstab-generator[6022]: Ignoring "noauto" option for root device
	[  +0.087991] systemd-fstab-generator[6034]: Ignoring "noauto" option for root device
	[  +0.106391] systemd-fstab-generator[6049]: Ignoring "noauto" option for root device
	[  +0.236907] systemd-fstab-generator[6214]: Ignoring "noauto" option for root device
	[  +1.081256] systemd-fstab-generator[6335]: Ignoring "noauto" option for root device
	[  +3.413974] kauditd_printk_skb: 198 callbacks suppressed
	[  +5.136209] kauditd_printk_skb: 36 callbacks suppressed
	[Sep27 17:14] systemd-fstab-generator[7456]: Ignoring "noauto" option for root device
	[  +4.250513] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.137311] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.592777] kauditd_printk_skb: 20 callbacks suppressed
	[ +14.603928] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.708315] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [7210194424bf] <==
	{"level":"info","ts":"2024-09-27T17:13:00.655237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T17:13:00.655325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-27T17:13:00.655744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T17:13:00.655769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-27T17:13:00.655847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T17:13:00.655870Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-27T17:13:00.660491Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-513000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T17:13:00.660935Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T17:13:00.661058Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T17:13:00.661518Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T17:13:00.661109Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T17:13:00.663463Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T17:13:00.663462Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T17:13:00.666012Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T17:13:00.668196Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-27T17:13:28.951868Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-27T17:13:28.951895Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-513000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-27T17:13:28.951933Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T17:13:28.951973Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T17:13:28.959872Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T17:13:28.959896Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T17:13:28.959917Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-27T17:13:28.963415Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-27T17:13:28.963484Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-27T17:13:28.963489Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-513000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [9ea98b848020] <==
	{"level":"info","ts":"2024-09-27T17:13:43.853143Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-27T17:13:43.853200Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T17:13:43.853233Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T17:13:43.854325Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T17:13:43.864635Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T17:13:43.864745Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T17:13:43.864763Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T17:13:43.864854Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-27T17:13:43.864871Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-27T17:13:45.107741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-27T17:13:45.108940Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-27T17:13:45.109209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-27T17:13:45.109377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-27T17:13:45.109571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-27T17:13:45.109752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-27T17:13:45.109886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-27T17:13:45.114775Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-513000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T17:13:45.115349Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T17:13:45.115882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T17:13:45.116329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T17:13:45.116493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T17:13:45.117439Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T17:13:45.117786Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T17:13:45.125378Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T17:13:45.125909Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 17:14:53 up 3 min,  0 users,  load average: 0.39, 0.36, 0.16
	Linux functional-513000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f0d460e1ef88] <==
	I0927 17:13:45.756360       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 17:13:45.756658       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 17:13:45.756746       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 17:13:45.757115       1 aggregator.go:171] initial CRD sync complete...
	I0927 17:13:45.757124       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 17:13:45.757127       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 17:13:45.757129       1 cache.go:39] Caches are synced for autoregister controller
	I0927 17:13:45.758260       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 17:13:45.759599       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 17:13:45.761530       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 17:13:45.761535       1 policy_source.go:224] refreshing policies
	I0927 17:13:45.786152       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 17:13:46.660559       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0927 17:13:46.764343       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0927 17:13:46.765117       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 17:13:46.766763       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 17:13:47.190495       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 17:13:47.194253       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 17:13:47.205947       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 17:13:47.213269       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 17:13:47.215300       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 17:14:07.774497       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.71.242"}
	I0927 17:14:12.577417       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0927 17:14:12.619697       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.163.229"}
	I0927 17:14:17.087039       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.212.62"}
	
	
	==> kube-controller-manager [0c872f1ed380] <==
	I0927 17:13:04.532406       1 shared_informer.go:320] Caches are synced for stateful set
	I0927 17:13:04.533539       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0927 17:13:04.535178       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"functional-513000\" does not exist"
	I0927 17:13:04.609864       1 shared_informer.go:320] Caches are synced for GC
	I0927 17:13:04.609909       1 shared_informer.go:320] Caches are synced for daemon sets
	I0927 17:13:04.626754       1 shared_informer.go:320] Caches are synced for TTL
	I0927 17:13:04.627907       1 shared_informer.go:320] Caches are synced for node
	I0927 17:13:04.628001       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0927 17:13:04.628022       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0927 17:13:04.628082       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0927 17:13:04.628092       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0927 17:13:04.628147       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-513000"
	I0927 17:13:04.628994       1 shared_informer.go:320] Caches are synced for persistent volume
	I0927 17:13:04.632196       1 shared_informer.go:320] Caches are synced for attach detach
	I0927 17:13:04.632257       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0927 17:13:04.632347       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0927 17:13:04.728353       1 shared_informer.go:320] Caches are synced for taint
	I0927 17:13:04.728458       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0927 17:13:04.728505       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-513000"
	I0927 17:13:04.728542       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0927 17:13:04.736391       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 17:13:04.736395       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 17:13:05.144603       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 17:13:05.182933       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 17:13:05.183079       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ba1d748b4cab] <==
	I0927 17:13:49.029602       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0927 17:13:49.079005       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0927 17:13:49.191209       1 shared_informer.go:320] Caches are synced for disruption
	I0927 17:13:49.198642       1 shared_informer.go:320] Caches are synced for stateful set
	I0927 17:13:49.216342       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0927 17:13:49.228565       1 shared_informer.go:320] Caches are synced for deployment
	I0927 17:13:49.231047       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 17:13:49.235133       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 17:13:49.385634       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="169.238642ms"
	I0927 17:13:49.385847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="182.497µs"
	I0927 17:13:49.645786       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 17:13:49.685602       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 17:13:49.685652       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0927 17:13:51.446955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.006647ms"
	I0927 17:13:51.447241       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="69.582µs"
	I0927 17:14:12.586855       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="7.665458ms"
	I0927 17:14:12.591495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="4.607424ms"
	I0927 17:14:12.599172       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="7.649625ms"
	I0927 17:14:12.599234       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="22.624µs"
	I0927 17:14:18.570203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="24.083µs"
	I0927 17:14:19.593611       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="120.498µs"
	I0927 17:14:20.594502       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="24.792µs"
	I0927 17:14:33.778315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.541µs"
	I0927 17:14:45.054702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="49.541µs"
	I0927 17:14:46.702946       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-513000"
	
	
	==> kube-proxy [6ba5a90de040] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 17:13:01.769137       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 17:13:01.772680       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0927 17:13:01.772707       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 17:13:01.780937       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 17:13:01.780952       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 17:13:01.780963       1 server_linux.go:169] "Using iptables Proxier"
	I0927 17:13:01.781575       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 17:13:01.781716       1 server.go:483] "Version info" version="v1.31.1"
	I0927 17:13:01.781747       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:13:01.782191       1 config.go:199] "Starting service config controller"
	I0927 17:13:01.782204       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 17:13:01.782249       1 config.go:105] "Starting endpoint slice config controller"
	I0927 17:13:01.782255       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 17:13:01.782470       1 config.go:328] "Starting node config controller"
	I0927 17:13:01.782902       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 17:13:01.882816       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 17:13:01.882832       1 shared_informer.go:320] Caches are synced for service config
	I0927 17:13:01.882927       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [abd7153d2d92] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 17:13:46.538090       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 17:13:46.542317       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0927 17:13:46.542345       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 17:13:46.550965       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 17:13:46.550983       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 17:13:46.551036       1 server_linux.go:169] "Using iptables Proxier"
	I0927 17:13:46.551616       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 17:13:46.551701       1 server.go:483] "Version info" version="v1.31.1"
	I0927 17:13:46.551708       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:13:46.552240       1 config.go:199] "Starting service config controller"
	I0927 17:13:46.552247       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 17:13:46.552257       1 config.go:105] "Starting endpoint slice config controller"
	I0927 17:13:46.552259       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 17:13:46.552467       1 config.go:328] "Starting node config controller"
	I0927 17:13:46.552478       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 17:13:46.657645       1 shared_informer.go:320] Caches are synced for node config
	I0927 17:13:46.657728       1 shared_informer.go:320] Caches are synced for service config
	I0927 17:13:46.657754       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [258a301cea7c] <==
	I0927 17:12:59.545090       1 serving.go:386] Generated self-signed cert in-memory
	W0927 17:13:01.179759       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 17:13:01.179884       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 17:13:01.179942       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 17:13:01.179981       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 17:13:01.213071       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 17:13:01.213090       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:13:01.213964       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 17:13:01.214013       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 17:13:01.214031       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 17:13:01.214038       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 17:13:01.315392       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 17:13:28.954295       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0927 17:13:28.954320       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0927 17:13:28.954357       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0927 17:13:28.954442       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [323570af0995] <==
	I0927 17:13:44.280275       1 serving.go:386] Generated self-signed cert in-memory
	W0927 17:13:45.680335       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 17:13:45.680352       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 17:13:45.680357       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 17:13:45.680360       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 17:13:45.697347       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 17:13:45.697363       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:13:45.701776       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 17:13:45.702695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 17:13:45.704138       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 17:13:45.702773       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 17:13:45.805395       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 17:14:39 functional-513000 kubelet[6342]: I0927 17:14:39.875353    6342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.10229031 podStartE2EDuration="6.875328785s" podCreationTimestamp="2024-09-27 17:14:33 +0000 UTC" firstStartedPulling="2024-09-27 17:14:33.968284688 +0000 UTC m=+51.014126243" lastFinishedPulling="2024-09-27 17:14:38.741323205 +0000 UTC m=+55.787164718" observedRunningTime="2024-09-27 17:14:39.875076705 +0000 UTC m=+56.920918260" watchObservedRunningTime="2024-09-27 17:14:39.875328785 +0000 UTC m=+56.921170381"
	Sep 27 17:14:43 functional-513000 kubelet[6342]: E0927 17:14:43.045602    6342 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 17:14:43 functional-513000 kubelet[6342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 17:14:43 functional-513000 kubelet[6342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 17:14:43 functional-513000 kubelet[6342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 17:14:43 functional-513000 kubelet[6342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 17:14:43 functional-513000 kubelet[6342]: I0927 17:14:43.085104    6342 scope.go:117] "RemoveContainer" containerID="a1f50ff565ca0492b515d5b25d14e8dc25b64c3d6a201173ee4040f02730eafa"
	Sep 27 17:14:45 functional-513000 kubelet[6342]: I0927 17:14:45.037665    6342 scope.go:117] "RemoveContainer" containerID="7ebf1abd8a20f508ff811967379c5179cf0c2ee88aa1e933ee52cf65d3aebc0c"
	Sep 27 17:14:45 functional-513000 kubelet[6342]: E0927 17:14:45.038650    6342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-npgjt_default(0a5cc8df-3b79-4ee9-958f-e108e0a8125e)\"" pod="default/hello-node-connect-65d86f57f4-npgjt" podUID="0a5cc8df-3b79-4ee9-958f-e108e0a8125e"
	Sep 27 17:14:45 functional-513000 kubelet[6342]: I0927 17:14:45.978366    6342 scope.go:117] "RemoveContainer" containerID="a816efee837bd984ca20c1cd6013413fcd8c786f83f358c928ddbd1396c40d9a"
	Sep 27 17:14:45 functional-513000 kubelet[6342]: I0927 17:14:45.984766    6342 scope.go:117] "RemoveContainer" containerID="a816efee837bd984ca20c1cd6013413fcd8c786f83f358c928ddbd1396c40d9a"
	Sep 27 17:14:45 functional-513000 kubelet[6342]: E0927 17:14:45.985108    6342 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: a816efee837bd984ca20c1cd6013413fcd8c786f83f358c928ddbd1396c40d9a" containerID="a816efee837bd984ca20c1cd6013413fcd8c786f83f358c928ddbd1396c40d9a"
	Sep 27 17:14:45 functional-513000 kubelet[6342]: I0927 17:14:45.985129    6342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a816efee837bd984ca20c1cd6013413fcd8c786f83f358c928ddbd1396c40d9a"} err="failed to get container status \"a816efee837bd984ca20c1cd6013413fcd8c786f83f358c928ddbd1396c40d9a\": rpc error: code = Unknown desc = Error response from daemon: No such container: a816efee837bd984ca20c1cd6013413fcd8c786f83f358c928ddbd1396c40d9a"
	Sep 27 17:14:46 functional-513000 kubelet[6342]: I0927 17:14:46.068294    6342 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xptp6\" (UniqueName: \"kubernetes.io/projected/0676218c-07b4-489c-8f1e-b40ae63d1f31-kube-api-access-xptp6\") pod \"0676218c-07b4-489c-8f1e-b40ae63d1f31\" (UID: \"0676218c-07b4-489c-8f1e-b40ae63d1f31\") "
	Sep 27 17:14:46 functional-513000 kubelet[6342]: I0927 17:14:46.068317    6342 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/0676218c-07b4-489c-8f1e-b40ae63d1f31-pvc-6972276c-f959-4556-a31c-2ace5c5cdae2\") pod \"0676218c-07b4-489c-8f1e-b40ae63d1f31\" (UID: \"0676218c-07b4-489c-8f1e-b40ae63d1f31\") "
	Sep 27 17:14:46 functional-513000 kubelet[6342]: I0927 17:14:46.068361    6342 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0676218c-07b4-489c-8f1e-b40ae63d1f31-pvc-6972276c-f959-4556-a31c-2ace5c5cdae2" (OuterVolumeSpecName: "mypd") pod "0676218c-07b4-489c-8f1e-b40ae63d1f31" (UID: "0676218c-07b4-489c-8f1e-b40ae63d1f31"). InnerVolumeSpecName "pvc-6972276c-f959-4556-a31c-2ace5c5cdae2". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 17:14:46 functional-513000 kubelet[6342]: I0927 17:14:46.069419    6342 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0676218c-07b4-489c-8f1e-b40ae63d1f31-kube-api-access-xptp6" (OuterVolumeSpecName: "kube-api-access-xptp6") pod "0676218c-07b4-489c-8f1e-b40ae63d1f31" (UID: "0676218c-07b4-489c-8f1e-b40ae63d1f31"). InnerVolumeSpecName "kube-api-access-xptp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 17:14:46 functional-513000 kubelet[6342]: I0927 17:14:46.168839    6342 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xptp6\" (UniqueName: \"kubernetes.io/projected/0676218c-07b4-489c-8f1e-b40ae63d1f31-kube-api-access-xptp6\") on node \"functional-513000\" DevicePath \"\""
	Sep 27 17:14:46 functional-513000 kubelet[6342]: I0927 17:14:46.168863    6342 reconciler_common.go:288] "Volume detached for volume \"pvc-6972276c-f959-4556-a31c-2ace5c5cdae2\" (UniqueName: \"kubernetes.io/host-path/0676218c-07b4-489c-8f1e-b40ae63d1f31-pvc-6972276c-f959-4556-a31c-2ace5c5cdae2\") on node \"functional-513000\" DevicePath \"\""
	Sep 27 17:14:46 functional-513000 kubelet[6342]: E0927 17:14:46.357359    6342 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0676218c-07b4-489c-8f1e-b40ae63d1f31" containerName="myfrontend"
	Sep 27 17:14:46 functional-513000 kubelet[6342]: I0927 17:14:46.357392    6342 memory_manager.go:354] "RemoveStaleState removing state" podUID="0676218c-07b4-489c-8f1e-b40ae63d1f31" containerName="myfrontend"
	Sep 27 17:14:46 functional-513000 kubelet[6342]: I0927 17:14:46.472471    6342 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99c8r\" (UniqueName: \"kubernetes.io/projected/802f7276-3e0c-4633-8d47-79009353d78a-kube-api-access-99c8r\") pod \"sp-pod\" (UID: \"802f7276-3e0c-4633-8d47-79009353d78a\") " pod="default/sp-pod"
	Sep 27 17:14:46 functional-513000 kubelet[6342]: I0927 17:14:46.472527    6342 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6972276c-f959-4556-a31c-2ace5c5cdae2\" (UniqueName: \"kubernetes.io/host-path/802f7276-3e0c-4633-8d47-79009353d78a-pvc-6972276c-f959-4556-a31c-2ace5c5cdae2\") pod \"sp-pod\" (UID: \"802f7276-3e0c-4633-8d47-79009353d78a\") " pod="default/sp-pod"
	Sep 27 17:14:47 functional-513000 kubelet[6342]: I0927 17:14:47.042250    6342 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0676218c-07b4-489c-8f1e-b40ae63d1f31" path="/var/lib/kubelet/pods/0676218c-07b4-489c-8f1e-b40ae63d1f31/volumes"
	Sep 27 17:14:48 functional-513000 kubelet[6342]: I0927 17:14:48.028619    6342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.26901223 podStartE2EDuration="2.028592287s" podCreationTimestamp="2024-09-27 17:14:46 +0000 UTC" firstStartedPulling="2024-09-27 17:14:46.761501691 +0000 UTC m=+63.807343204" lastFinishedPulling="2024-09-27 17:14:47.521081706 +0000 UTC m=+64.566923261" observedRunningTime="2024-09-27 17:14:48.028376999 +0000 UTC m=+65.074218554" watchObservedRunningTime="2024-09-27 17:14:48.028592287 +0000 UTC m=+65.074433842"
	
	
	==> storage-provisioner [7b89e8836471] <==
	I0927 17:14:02.127745       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 17:14:02.131325       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 17:14:02.131342       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 17:14:19.549860       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 17:14:19.550149       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-513000_ae77cc3e-f47b-45c5-a09b-1b41849c3db1!
	I0927 17:14:19.550681       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8ce03e0-176a-4faa-8157-808568fe786b", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-513000_ae77cc3e-f47b-45c5-a09b-1b41849c3db1 became leader
	I0927 17:14:19.650705       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-513000_ae77cc3e-f47b-45c5-a09b-1b41849c3db1!
	I0927 17:14:33.472627       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0927 17:14:33.472704       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    b95aac18-2e0a-4348-8485-59154b1791df 349 0 2024-09-27 17:12:07 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-27 17:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-6972276c-f959-4556-a31c-2ace5c5cdae2 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  6972276c-f959-4556-a31c-2ace5c5cdae2 718 0 2024-09-27 17:14:33 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-27 17:14:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-27 17:14:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0927 17:14:33.473052       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-6972276c-f959-4556-a31c-2ace5c5cdae2" provisioned
	I0927 17:14:33.473070       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0927 17:14:33.473120       1 volume_store.go:212] Trying to save persistentvolume "pvc-6972276c-f959-4556-a31c-2ace5c5cdae2"
	I0927 17:14:33.473901       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"6972276c-f959-4556-a31c-2ace5c5cdae2", APIVersion:"v1", ResourceVersion:"718", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0927 17:14:33.478017       1 volume_store.go:219] persistentvolume "pvc-6972276c-f959-4556-a31c-2ace5c5cdae2" saved
	I0927 17:14:33.478230       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"6972276c-f959-4556-a31c-2ace5c5cdae2", APIVersion:"v1", ResourceVersion:"718", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-6972276c-f959-4556-a31c-2ace5c5cdae2
	
	
	==> storage-provisioner [a6d95094fed5] <==
	I0927 17:13:46.476191       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0927 17:13:46.476713       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-513000 -n functional-513000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-513000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (41.83s)
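Note the failure mode in the kubelet log above: the test's backend container (echoserver-arm in pod hello-node-connect-65d86f57f4-npgjt) is in CrashLoopBackOff, so there was never a healthy endpoint for the connect step to reach. Below is a minimal Go sketch of that kind of retrying HTTP probe; the URL is a hypothetical NodePort endpoint, not one taken from this run, and the real test resolves the service URL first.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Hypothetical NodePort URL; substitute the URL the service actually exposes.
		url := "http://192.168.105.4:30000"
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				fmt.Println("service answered:", resp.Status)
				return
			}
			fmt.Println("retrying after error:", err)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up: endpoint never answered (consistent with the CrashLoopBackOff above)")
	}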

TestMultiControlPlane/serial/StopSecondaryNode (64.14s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 node stop m02 -v=7 --alsologtostderr
E0927 10:19:21.408345    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:19:22.820791    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-500000 node stop m02 -v=7 --alsologtostderr: (12.192460542s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
E0927 10:19:33.064193    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:19:53.546148    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: (25.962524542s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 3 (25.980587875s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0927 10:20:21.938353    3674 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0927 10:20:21.938362    3674 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (64.14s)
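The status checks above came up empty because minikube could not even open an SSH session to the node (dial tcp 192.168.105.5:22 timed out), so no per-node state was retrievable. A quick Go sketch of the same TCP dial, handy for telling a down guest apart from a slow one; the 5-second timeout is an arbitrary choice:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Node IP taken from the status errors above.
		conn, err := net.DialTimeout("tcp", "192.168.105.5:22", 5*time.Second)
		if err != nil {
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}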

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.93s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0927 10:20:34.509057    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (25.974029417s)
ha_test.go:413: expected profile "ha-500000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":
false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\
"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 3 (25.958471292s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0927 10:21:13.869456    3684 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0927 10:21:13.869484    3684 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.93s)
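The assertion compares the "Status" field of the profile JSON quoted above, expecting "Degraded" once one control-plane node is down but finding "Unknown", presumably because the primary node was unreachable as well. A small Go sketch that pulls just the Name/Status pair out of that JSON; it assumes the same out/minikube-darwin-arm64 binary is invocable from the working directory:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields this check cares about; the full schema is far
	// larger, as the quoted output shows.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}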

TestMultiControlPlane/serial/RestartSecondaryNode (82.99s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.10099575s)

-- stdout --
	* Starting "ha-500000-m02" control-plane node in "ha-500000" cluster
	* Restarting existing qemu2 VM for "ha-500000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-500000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:21:13.917925    3690 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:21:13.918212    3690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:21:13.918216    3690 out.go:358] Setting ErrFile to fd 2...
	I0927 10:21:13.918219    3690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:21:13.918361    3690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:21:13.918638    3690 mustload.go:65] Loading cluster: ha-500000
	I0927 10:21:13.918899    3690 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0927 10:21:13.919185    3690 host.go:58] "ha-500000-m02" host status: Stopped
	I0927 10:21:13.923688    3690 out.go:177] * Starting "ha-500000-m02" control-plane node in "ha-500000" cluster
	I0927 10:21:13.928676    3690 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:21:13.928690    3690 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:21:13.928699    3690 cache.go:56] Caching tarball of preloaded images
	I0927 10:21:13.928774    3690 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:21:13.928781    3690 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:21:13.928839    3690 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/ha-500000/config.json ...
	I0927 10:21:13.929464    3690 start.go:360] acquireMachinesLock for ha-500000-m02: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:21:13.929533    3690 start.go:364] duration metric: took 33.917µs to acquireMachinesLock for "ha-500000-m02"
	I0927 10:21:13.929543    3690 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:21:13.929554    3690 fix.go:54] fixHost starting: m02
	I0927 10:21:13.929666    3690 fix.go:112] recreateIfNeeded on ha-500000-m02: state=Stopped err=<nil>
	W0927 10:21:13.929672    3690 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:21:13.933635    3690 out.go:177] * Restarting existing qemu2 VM for "ha-500000-m02" ...
	I0927 10:21:13.937642    3690 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:21:13.937683    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:01:82:8d:82:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/disk.qcow2
	I0927 10:21:13.940722    3690 main.go:141] libmachine: STDOUT: 
	I0927 10:21:13.940740    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:21:13.940776    3690 fix.go:56] duration metric: took 11.224417ms for fixHost
	I0927 10:21:13.940793    3690 start.go:83] releasing machines lock for "ha-500000-m02", held for 11.242292ms
	W0927 10:21:13.940799    3690 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:21:13.940830    3690 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:21:13.940834    3690 start.go:729] Will try again in 5 seconds ...
	I0927 10:21:18.942878    3690 start.go:360] acquireMachinesLock for ha-500000-m02: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:21:18.943065    3690 start.go:364] duration metric: took 138.541µs to acquireMachinesLock for "ha-500000-m02"
	I0927 10:21:18.943105    3690 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:21:18.943113    3690 fix.go:54] fixHost starting: m02
	I0927 10:21:18.943304    3690 fix.go:112] recreateIfNeeded on ha-500000-m02: state=Stopped err=<nil>
	W0927 10:21:18.943310    3690 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:21:18.946946    3690 out.go:177] * Restarting existing qemu2 VM for "ha-500000-m02" ...
	I0927 10:21:18.951072    3690 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:21:18.951117    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:01:82:8d:82:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/disk.qcow2
	I0927 10:21:18.953310    3690 main.go:141] libmachine: STDOUT: 
	I0927 10:21:18.953324    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:21:18.953347    3690 fix.go:56] duration metric: took 10.234708ms for fixHost
	I0927 10:21:18.953351    3690 start.go:83] releasing machines lock for "ha-500000-m02", held for 10.278125ms
	W0927 10:21:18.953388    3690 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:21:18.957093    3690 out.go:201] 
	W0927 10:21:18.961022    3690 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:21:18.961027    3690 out.go:270] * 
	* 
	W0927 10:21:18.962762    3690 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:21:18.967150    3690 out.go:201] 

** /stderr **
ha_test.go:422: I0927 10:21:13.917925    3690 out.go:345] Setting OutFile to fd 1 ...
I0927 10:21:13.918212    3690 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:21:13.918216    3690 out.go:358] Setting ErrFile to fd 2...
I0927 10:21:13.918219    3690 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:21:13.918361    3690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
I0927 10:21:13.918638    3690 mustload.go:65] Loading cluster: ha-500000
I0927 10:21:13.918899    3690 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0927 10:21:13.919185    3690 host.go:58] "ha-500000-m02" host status: Stopped
I0927 10:21:13.923688    3690 out.go:177] * Starting "ha-500000-m02" control-plane node in "ha-500000" cluster
I0927 10:21:13.928676    3690 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0927 10:21:13.928690    3690 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0927 10:21:13.928699    3690 cache.go:56] Caching tarball of preloaded images
I0927 10:21:13.928774    3690 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0927 10:21:13.928781    3690 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0927 10:21:13.928839    3690 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/ha-500000/config.json ...
I0927 10:21:13.929464    3690 start.go:360] acquireMachinesLock for ha-500000-m02: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0927 10:21:13.929533    3690 start.go:364] duration metric: took 33.917µs to acquireMachinesLock for "ha-500000-m02"
I0927 10:21:13.929543    3690 start.go:96] Skipping create...Using existing machine configuration
I0927 10:21:13.929554    3690 fix.go:54] fixHost starting: m02
I0927 10:21:13.929666    3690 fix.go:112] recreateIfNeeded on ha-500000-m02: state=Stopped err=<nil>
W0927 10:21:13.929672    3690 fix.go:138] unexpected machine state, will restart: <nil>
I0927 10:21:13.933635    3690 out.go:177] * Restarting existing qemu2 VM for "ha-500000-m02" ...
I0927 10:21:13.937642    3690 qemu.go:418] Using hvf for hardware acceleration
I0927 10:21:13.937683    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:01:82:8d:82:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/disk.qcow2
I0927 10:21:13.940722    3690 main.go:141] libmachine: STDOUT: 
I0927 10:21:13.940740    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0927 10:21:13.940776    3690 fix.go:56] duration metric: took 11.224417ms for fixHost
I0927 10:21:13.940793    3690 start.go:83] releasing machines lock for "ha-500000-m02", held for 11.242292ms
W0927 10:21:13.940799    3690 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0927 10:21:13.940830    3690 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0927 10:21:13.940834    3690 start.go:729] Will try again in 5 seconds ...
I0927 10:21:18.942878    3690 start.go:360] acquireMachinesLock for ha-500000-m02: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0927 10:21:18.943065    3690 start.go:364] duration metric: took 138.541µs to acquireMachinesLock for "ha-500000-m02"
I0927 10:21:18.943105    3690 start.go:96] Skipping create...Using existing machine configuration
I0927 10:21:18.943113    3690 fix.go:54] fixHost starting: m02
I0927 10:21:18.943304    3690 fix.go:112] recreateIfNeeded on ha-500000-m02: state=Stopped err=<nil>
W0927 10:21:18.943310    3690 fix.go:138] unexpected machine state, will restart: <nil>
I0927 10:21:18.946946    3690 out.go:177] * Restarting existing qemu2 VM for "ha-500000-m02" ...
I0927 10:21:18.951072    3690 qemu.go:418] Using hvf for hardware acceleration
I0927 10:21:18.951117    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:01:82:8d:82:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000-m02/disk.qcow2
I0927 10:21:18.953310    3690 main.go:141] libmachine: STDOUT: 
I0927 10:21:18.953324    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0927 10:21:18.953347    3690 fix.go:56] duration metric: took 10.234708ms for fixHost
I0927 10:21:18.953351    3690 start.go:83] releasing machines lock for "ha-500000-m02", held for 10.278125ms
W0927 10:21:18.953388    3690 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0927 10:21:18.957093    3690 out.go:201] 
W0927 10:21:18.961022    3690 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0927 10:21:18.961027    3690 out.go:270] * 
* 
W0927 10:21:18.962762    3690 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0927 10:21:18.967150    3690 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-500000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: (25.960028291s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
E0927 10:21:56.431210    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (25.961588583s)

** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: connect: operation timed out

** /stderr **
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 3 (25.962517583s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0927 10:22:36.855071    3710 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0927 10:22:36.855081    3710 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (82.99s)
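Every VM restart in this test failed at the same point: qemu-system-aarch64 is launched through socket_vmnet_client, and nothing was listening on /var/run/socket_vmnet, so the netdev attach was refused before the guest ever booted. A minimal Go probe for that socket, assuming the default path shown in the command lines above:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Path taken from the qemu invocations logged above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// "connection refused" here means the socket_vmnet daemon is not
			// running, which is exactly what the restarts above tripped over.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet accepting connections")
	}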

                                                
                                    
x
+
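Two different layers time out in the log above: `kubectl get nodes` fails dialing the HA apiserver VIP (192.168.105.254:8443), while the post-mortem status check fails dialing the primary node over SSH (192.168.105.5:22). A minimal Go sketch that probes both endpoints the same way (addresses copied from the log; the 5-second timeout is an arbitrary choice for the sketch, not the harness's value):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoints taken from the failure log: the apiserver VIP and the
	// primary control-plane node's SSH port.
	for _, addr := range []string{"192.168.105.254:8443", "192.168.105.5:22"} {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// While the VMs are down this reproduces the
			// "operation timed out" failures seen above.
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}
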
TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-500000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-500000 -v=7 --alsologtostderr
E0927 10:23:53.674413    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:24:12.549324    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:24:40.272493    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-500000 -v=7 --alsologtostderr: (3m49.010730291s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-500000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-500000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.23723025s)

-- stdout --
	* [ha-500000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-500000" primary control-plane node in "ha-500000" cluster
	* Restarting existing qemu2 VM for "ha-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0927 10:26:28.475250    3773 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:26:28.475459    3773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:26:28.475463    3773 out.go:358] Setting ErrFile to fd 2...
	I0927 10:26:28.475467    3773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:26:28.475642    3773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:26:28.476966    3773 out.go:352] Setting JSON to false
	I0927 10:26:28.497505    3773 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3352,"bootTime":1727454636,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:26:28.497573    3773 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:26:28.502906    3773 out.go:177] * [ha-500000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:26:28.510873    3773 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:26:28.510917    3773 notify.go:220] Checking for updates...
	I0927 10:26:28.518848    3773 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:26:28.522806    3773 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:26:28.525822    3773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:26:28.528854    3773 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:26:28.531679    3773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:26:28.535208    3773 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:26:28.535256    3773 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:26:28.539774    3773 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:26:28.546845    3773 start.go:297] selected driver: qemu2
	I0927 10:26:28.546854    3773 start.go:901] validating driver "qemu2" against &{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:26:28.546951    3773 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:26:28.549815    3773 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:26:28.549843    3773 cni.go:84] Creating CNI manager for ""
	I0927 10:26:28.549874    3773 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0927 10:26:28.549931    3773 start.go:340] cluster config:
	{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:26:28.554358    3773 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:26:28.562767    3773 out.go:177] * Starting "ha-500000" primary control-plane node in "ha-500000" cluster
	I0927 10:26:28.566848    3773 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:26:28.566864    3773 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:26:28.566873    3773 cache.go:56] Caching tarball of preloaded images
	I0927 10:26:28.566959    3773 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:26:28.566966    3773 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:26:28.567040    3773 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/ha-500000/config.json ...
	I0927 10:26:28.567549    3773 start.go:360] acquireMachinesLock for ha-500000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:26:28.567586    3773 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "ha-500000"
	I0927 10:26:28.567595    3773 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:26:28.567600    3773 fix.go:54] fixHost starting: 
	I0927 10:26:28.567736    3773 fix.go:112] recreateIfNeeded on ha-500000: state=Stopped err=<nil>
	W0927 10:26:28.567745    3773 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:26:28.571783    3773 out.go:177] * Restarting existing qemu2 VM for "ha-500000" ...
	I0927 10:26:28.579625    3773 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:26:28.579661    3773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b5:49:5a:5a:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/disk.qcow2
	I0927 10:26:28.581762    3773 main.go:141] libmachine: STDOUT: 
	I0927 10:26:28.581781    3773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:26:28.581812    3773 fix.go:56] duration metric: took 14.209833ms for fixHost
	I0927 10:26:28.581816    3773 start.go:83] releasing machines lock for "ha-500000", held for 14.225917ms
	W0927 10:26:28.581824    3773 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:26:28.581862    3773 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:26:28.581867    3773 start.go:729] Will try again in 5 seconds ...
	I0927 10:26:33.584030    3773 start.go:360] acquireMachinesLock for ha-500000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:26:33.584453    3773 start.go:364] duration metric: took 331.416µs to acquireMachinesLock for "ha-500000"
	I0927 10:26:33.584602    3773 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:26:33.584622    3773 fix.go:54] fixHost starting: 
	I0927 10:26:33.585383    3773 fix.go:112] recreateIfNeeded on ha-500000: state=Stopped err=<nil>
	W0927 10:26:33.585414    3773 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:26:33.593896    3773 out.go:177] * Restarting existing qemu2 VM for "ha-500000" ...
	I0927 10:26:33.597914    3773 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:26:33.598154    3773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b5:49:5a:5a:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/disk.qcow2
	I0927 10:26:33.607255    3773 main.go:141] libmachine: STDOUT: 
	I0927 10:26:33.607311    3773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:26:33.607390    3773 fix.go:56] duration metric: took 22.768041ms for fixHost
	I0927 10:26:33.607409    3773 start.go:83] releasing machines lock for "ha-500000", held for 22.927458ms
	W0927 10:26:33.607586    3773 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:26:33.614845    3773 out.go:201] 
	W0927 10:26:33.618940    3773 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:26:33.618966    3773 out.go:270] * 
	* 
	W0927 10:26:33.621791    3773 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:26:33.628909    3773 out.go:201]

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-500000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-500000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (32.899542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)

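Both restart attempts above die before qemu boots: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which is refused on /var/run/socket_vmnet, so no VM comes up and the start exits with GUEST_PROVISION. A quick way to confirm the daemon side of that socket is down is to dial it directly; a minimal Go sketch (socket path taken from the log; this check is not part of the test suite):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Same unix socket that socket_vmnet_client connects to in the log.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// With no socket_vmnet daemon listening, this prints the same
		// "connect: connection refused" seen in the qemu STDERR above.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
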
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.546792ms)

-- stdout --
	* The control-plane node ha-500000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-500000"

-- /stdout --
** stderr ** 
	I0927 10:26:33.772757    3788 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:26:33.773242    3788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:26:33.773247    3788 out.go:358] Setting ErrFile to fd 2...
	I0927 10:26:33.773249    3788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:26:33.773450    3788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:26:33.773873    3788 mustload.go:65] Loading cluster: ha-500000
	I0927 10:26:33.774126    3788 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0927 10:26:33.774446    3788 out.go:270] ! The control-plane node ha-500000 host is not running (will try others): state=Stopped
	! The control-plane node ha-500000 host is not running (will try others): state=Stopped
	W0927 10:26:33.774551    3788 out.go:270] ! The control-plane node ha-500000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-500000-m02 host is not running (will try others): state=Stopped
	I0927 10:26:33.779023    3788 out.go:177] * The control-plane node ha-500000-m03 host is not running: state=Stopped
	I0927 10:26:33.782065    3788 out.go:177]   To start a cluster, run: "minikube start -p ha-500000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-500000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (30.480166ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-500000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-500000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-500000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0927 10:26:33.814280    3790 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:26:33.814426    3790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:26:33.814430    3790 out.go:358] Setting ErrFile to fd 2...
	I0927 10:26:33.814432    3790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:26:33.814581    3790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:26:33.814712    3790 out.go:352] Setting JSON to false
	I0927 10:26:33.814723    3790 mustload.go:65] Loading cluster: ha-500000
	I0927 10:26:33.814790    3790 notify.go:220] Checking for updates...
	I0927 10:26:33.814952    3790 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:26:33.814962    3790 status.go:174] checking status of ha-500000 ...
	I0927 10:26:33.815203    3790 status.go:364] ha-500000 host status = "Stopped" (err=<nil>)
	I0927 10:26:33.815207    3790 status.go:377] host is not running, skipping remaining checks
	I0927 10:26:33.815208    3790 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 10:26:33.815219    3790 status.go:174] checking status of ha-500000-m02 ...
	I0927 10:26:33.815306    3790 status.go:364] ha-500000-m02 host status = "Stopped" (err=<nil>)
	I0927 10:26:33.815309    3790 status.go:377] host is not running, skipping remaining checks
	I0927 10:26:33.815310    3790 status.go:176] ha-500000-m02 status: &{Name:ha-500000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 10:26:33.815314    3790 status.go:174] checking status of ha-500000-m03 ...
	I0927 10:26:33.815402    3790 status.go:364] ha-500000-m03 host status = "Stopped" (err=<nil>)
	I0927 10:26:33.815404    3790 status.go:377] host is not running, skipping remaining checks
	I0927 10:26:33.815406    3790 status.go:176] ha-500000-m03 status: &{Name:ha-500000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 10:26:33.815409    3790 status.go:174] checking status of ha-500000-m04 ...
	I0927 10:26:33.815501    3790 status.go:364] ha-500000-m04 host status = "Stopped" (err=<nil>)
	I0927 10:26:33.815503    3790 status.go:377] host is not running, skipping remaining checks
	I0927 10:26:33.815505    3790 status.go:176] ha-500000-m04 status: &{Name:ha-500000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (30.277416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

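The stderr above also shows the fallback order `node delete` walks before giving up with exit status 83: each control-plane host is tried in turn, warned about, and skipped because its state is Stopped. A toy Go sketch of that visible ordering (an illustration of the behaviour in the log, not minikube's actual mustload code):

package main

import "fmt"

func main() {
	// Host states as reported in the stderr above: every control-plane
	// host is stopped, so each candidate gets skipped in turn.
	state := map[string]string{
		"ha-500000":     "Stopped",
		"ha-500000-m02": "Stopped",
		"ha-500000-m03": "Stopped",
	}
	for _, name := range []string{"ha-500000", "ha-500000-m02", "ha-500000-m03"} {
		if state[name] != "Running" {
			fmt.Printf("! The control-plane node %s host is not running (will try others): state=%s\n", name, state[name])
			continue
		}
		fmt.Printf("using control-plane node %s\n", name)
		return
	}
	fmt.Println("no running control-plane node; the real command exits with status 83")
}
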
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-500000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (30.111208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

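The assertion at ha_test.go:413 reads the profile's Status field out of `minikube profile list --output json` and expects "Degraded", but with the restart having failed the profile still reports "Starting". A minimal Go sketch of reading that field from the JSON captured above (the struct is trimmed to the two fields the assertion uses; encoding/json ignores the rest, including the large Config object):

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed view of the `profile list --output json` payload, keeping only
// the fields the assertion inspects.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the JSON captured in the failure message above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-500000","Status":"Starting"}]}`)

	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // prints "ha-500000: Starting"
	}
}
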
TestMultiControlPlane/serial/StopCluster (202.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 stop -v=7 --alsologtostderr
E0927 10:28:53.669671    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:29:12.545176    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-500000 stop -v=7 --alsologtostderr: (3m21.969017042s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (65.797708ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-500000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-500000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-500000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0927 10:29:55.952220    3845 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:29:55.952426    3845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:29:55.952431    3845 out.go:358] Setting ErrFile to fd 2...
	I0927 10:29:55.952434    3845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:29:55.952593    3845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:29:55.952767    3845 out.go:352] Setting JSON to false
	I0927 10:29:55.952782    3845 mustload.go:65] Loading cluster: ha-500000
	I0927 10:29:55.952822    3845 notify.go:220] Checking for updates...
	I0927 10:29:55.953092    3845 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:29:55.953105    3845 status.go:174] checking status of ha-500000 ...
	I0927 10:29:55.953451    3845 status.go:364] ha-500000 host status = "Stopped" (err=<nil>)
	I0927 10:29:55.953456    3845 status.go:377] host is not running, skipping remaining checks
	I0927 10:29:55.953459    3845 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 10:29:55.953472    3845 status.go:174] checking status of ha-500000-m02 ...
	I0927 10:29:55.953608    3845 status.go:364] ha-500000-m02 host status = "Stopped" (err=<nil>)
	I0927 10:29:55.953612    3845 status.go:377] host is not running, skipping remaining checks
	I0927 10:29:55.953614    3845 status.go:176] ha-500000-m02 status: &{Name:ha-500000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 10:29:55.953619    3845 status.go:174] checking status of ha-500000-m03 ...
	I0927 10:29:55.953744    3845 status.go:364] ha-500000-m03 host status = "Stopped" (err=<nil>)
	I0927 10:29:55.953748    3845 status.go:377] host is not running, skipping remaining checks
	I0927 10:29:55.953750    3845 status.go:176] ha-500000-m03 status: &{Name:ha-500000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 10:29:55.953755    3845 status.go:174] checking status of ha-500000-m04 ...
	I0927 10:29:55.953879    3845 status.go:364] ha-500000-m04 host status = "Stopped" (err=<nil>)
	I0927 10:29:55.953883    3845 status.go:377] host is not running, skipping remaining checks
	I0927 10:29:55.953885    3845 status.go:176] ha-500000-m04 status: &{Name:ha-500000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-500000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-500000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-500000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-500000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-500000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-500000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-500000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-500000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-500000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (33.39ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.07s)

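The checks at ha_test.go:543, :549 and :552 scan the plain-text status output for role and component lines and compare the counts against what a stopped HA cluster should report. A rough Go illustration of that style of counting over the status text above (an illustration only, not the harness's implementation):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text abbreviated from the log above to one control-plane
	// node and the worker; the real output has three control planes.
	status := `ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-500000-m04
type: Worker
host: Stopped
kubelet: Stopped
`
	fmt.Println("control-plane nodes:", strings.Count(status, "type: Control Plane")) // 1
	fmt.Println("stopped hosts:", strings.Count(status, "host: Stopped"))             // 2
	fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))       // 2
}
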
TestMultiControlPlane/serial/RestartCluster (5.24s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-500000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-500000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.171728833s)

-- stdout --
	* [ha-500000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-500000" primary control-plane node in "ha-500000" cluster
	* Restarting existing qemu2 VM for "ha-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0927 10:29:56.016470    3849 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:29:56.016600    3849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:29:56.016603    3849 out.go:358] Setting ErrFile to fd 2...
	I0927 10:29:56.016606    3849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:29:56.016743    3849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:29:56.017951    3849 out.go:352] Setting JSON to false
	I0927 10:29:56.033866    3849 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3560,"bootTime":1727454636,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:29:56.033938    3849 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:29:56.038906    3849 out.go:177] * [ha-500000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:29:56.045871    3849 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:29:56.045941    3849 notify.go:220] Checking for updates...
	I0927 10:29:56.052746    3849 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:29:56.055835    3849 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:29:56.058828    3849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:29:56.060217    3849 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:29:56.062840    3849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:29:56.066198    3849 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:29:56.066451    3849 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:29:56.070652    3849 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:29:56.077825    3849 start.go:297] selected driver: qemu2
	I0927 10:29:56.077833    3849 start.go:901] validating driver "qemu2" against &{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:29:56.077927    3849 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:29:56.080251    3849 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:29:56.080277    3849 cni.go:84] Creating CNI manager for ""
	I0927 10:29:56.080296    3849 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0927 10:29:56.080344    3849 start.go:340] cluster config:
	{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:29:56.083712    3849 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:29:56.091843    3849 out.go:177] * Starting "ha-500000" primary control-plane node in "ha-500000" cluster
	I0927 10:29:56.095809    3849 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:29:56.095825    3849 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:29:56.095838    3849 cache.go:56] Caching tarball of preloaded images
	I0927 10:29:56.095917    3849 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:29:56.095923    3849 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:29:56.095998    3849 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/ha-500000/config.json ...
	I0927 10:29:56.096443    3849 start.go:360] acquireMachinesLock for ha-500000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:29:56.096478    3849 start.go:364] duration metric: took 28.584µs to acquireMachinesLock for "ha-500000"
	I0927 10:29:56.096487    3849 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:29:56.096493    3849 fix.go:54] fixHost starting: 
	I0927 10:29:56.096611    3849 fix.go:112] recreateIfNeeded on ha-500000: state=Stopped err=<nil>
	W0927 10:29:56.096621    3849 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:29:56.100876    3849 out.go:177] * Restarting existing qemu2 VM for "ha-500000" ...
	I0927 10:29:56.108729    3849 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:29:56.108762    3849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b5:49:5a:5a:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/disk.qcow2
	I0927 10:29:56.110730    3849 main.go:141] libmachine: STDOUT: 
	I0927 10:29:56.110749    3849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:29:56.110779    3849 fix.go:56] duration metric: took 14.287291ms for fixHost
	I0927 10:29:56.110783    3849 start.go:83] releasing machines lock for "ha-500000", held for 14.3005ms
	W0927 10:29:56.110789    3849 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:29:56.110829    3849 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:29:56.110833    3849 start.go:729] Will try again in 5 seconds ...
	I0927 10:30:01.112962    3849 start.go:360] acquireMachinesLock for ha-500000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:30:01.113316    3849 start.go:364] duration metric: took 277.041µs to acquireMachinesLock for "ha-500000"
	I0927 10:30:01.113695    3849 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:30:01.113708    3849 fix.go:54] fixHost starting: 
	I0927 10:30:01.114157    3849 fix.go:112] recreateIfNeeded on ha-500000: state=Stopped err=<nil>
	W0927 10:30:01.114171    3849 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:30:01.118676    3849 out.go:177] * Restarting existing qemu2 VM for "ha-500000" ...
	I0927 10:30:01.128793    3849 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:30:01.128962    3849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b5:49:5a:5a:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/ha-500000/disk.qcow2
	I0927 10:30:01.135594    3849 main.go:141] libmachine: STDOUT: 
	I0927 10:30:01.135639    3849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:30:01.135702    3849 fix.go:56] duration metric: took 21.993959ms for fixHost
	I0927 10:30:01.135715    3849 start.go:83] releasing machines lock for "ha-500000", held for 22.385417ms
	W0927 10:30:01.135917    3849 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:30:01.143725    3849 out.go:201] 
	W0927 10:30:01.147735    3849 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:30:01.147757    3849 out.go:270] * 
	* 
	W0927 10:30:01.149109    3849 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:30:01.154691    3849 out.go:201] 
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-500000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (65.459375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.24s)
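Note: every restart attempt in this test dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched. A minimal Go sketch that reproduces the same connectivity check (a hypothetical helper, not part of the test suite; it only assumes the daemon, when healthy, accepts connections on that unix socket):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// probeSocketVMnet dials the unix socket the way socket_vmnet_client
	// does on launch; "connection refused" here means the socket_vmnet
	// daemon is not running on the host.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet unreachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1) // the condition the driver surfaces as "exit status 1"
		}
		fmt.Println("socket_vmnet is accepting connections")
	}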
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-500000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logv
iewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\
":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (28.779584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
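Note: ha_test.go:413 asserts on the Status field of `minikube profile list --output json` and got "Starting" where it expected "Degraded". A minimal sketch of that decode step, assuming an illustrative struct (not minikube's own types) and a trimmed version of the payload quoted above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList models only the fields the status assertion reads; the
	// real payload (quoted in the failure above) carries the full cluster
	// config for each profile.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-500000","Status":"Starting"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test wanted "Degraded"; the never-started cluster
			// reports "Starting", so the assertion fails.
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}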
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-500000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-500000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.694ms)
-- stdout --
	* The control-plane node ha-500000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-500000"
-- /stdout --
** stderr ** 
	I0927 10:30:01.334204    4014 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:30:01.334355    4014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:30:01.334358    4014 out.go:358] Setting ErrFile to fd 2...
	I0927 10:30:01.334360    4014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:30:01.334499    4014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:30:01.334711    4014 mustload.go:65] Loading cluster: ha-500000
	I0927 10:30:01.334941    4014 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0927 10:30:01.335239    4014 out.go:270] ! The control-plane node ha-500000 host is not running (will try others): state=Stopped
	! The control-plane node ha-500000 host is not running (will try others): state=Stopped
	W0927 10:30:01.335336    4014 out.go:270] ! The control-plane node ha-500000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-500000-m02 host is not running (will try others): state=Stopped
	I0927 10:30:01.338641    4014 out.go:177] * The control-plane node ha-500000-m03 host is not running: state=Stopped
	I0927 10:30:01.342626    4014 out.go:177]   To start a cluster, run: "minikube start -p ha-500000"
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-500000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (29.077125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
TestImageBuild/serial/Setup (10.05s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-593000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-593000 --driver=qemu2 : exit status 80 (9.983182084s)
-- stdout --
	* [image-593000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-593000" primary control-plane node in "image-593000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-593000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-593000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-593000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-593000 -n image-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-593000 -n image-593000: exit status 7 (68.560208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.05s)
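Note: the stdout above shows minikube's create-host retry path: the first VM creation fails, the half-created "image-593000" machine is deleted, and one more attempt runs before the command exits with GUEST_PROVISION. A compressed sketch of that control flow under stated assumptions (hypothetical function names, not the actual start.go code; the 5-second delay matches the "Will try again in 5 seconds" lines elsewhere in this report):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start; here it always fails the
	// way the logs above do, so both attempts are exercised.
	func startHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func deleteHost(profile string) { fmt.Printf("* Deleting %q in qemu2 ...\n", profile) }

	func startWithRetry(profile string) error {
		if err := startHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			deleteHost(profile) // discard the half-created machine
			time.Sleep(5 * time.Second)
			return startHost(profile) // one retry, then give up
		}
		return nil
	}

	func main() {
		if err := startWithRetry("image-593000"); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}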
TestJSONOutput/start/Command (10.01s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-440000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0927 10:30:16.761325    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-440000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (10.012581833s)
-- stdout --
	{"specversion":"1.0","id":"8c504e78-9a54-4f8f-9b24-c6259c9bf146","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-440000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4512cbb5-c684-412e-9663-32e455a7a061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19712"}}
	{"specversion":"1.0","id":"ef858482-db8c-4c8a-a7c8-ac4abac7ba9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig"}}
	{"specversion":"1.0","id":"7066c08a-0c01-40c7-b791-3c8e4aac25a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"be7c6ce4-ce4b-4745-9943-39f92378aad5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"93f3ea6d-e65e-4d34-a570-23a50c3c3abe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube"}}
	{"specversion":"1.0","id":"48ec25b7-eb28-4c6d-a988-8f486f4a8fe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3cda850b-1cd0-45e7-856c-c8fdabc0a7c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"16517280-9c75-412a-88b1-ac30732bf452","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"053ef58f-0188-47c6-aa45-b1e344bdca29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-440000\" primary control-plane node in \"json-output-440000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"741c8514-ea8f-48cb-9d42-aca03e2729bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"2eed4757-eff9-48bf-aa9e-a55547e5a58c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-440000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"741b9790-0374-4888-a9b0-f8ecae8ad8a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"152b7ec5-d5e8-4c3e-b138-08aecebabfb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"87057f34-cfe1-4aed-ad4c-bef003e30cc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-440000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"1e93ff42-6bb7-446e-be32-67bc57381b71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"964c3779-4ccd-4321-b460-2d1922761d3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-440000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (10.01s)
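Note: the `invalid character 'O' looking for beginning of value` error comes from json_output_test.go decoding stdout line by line as cloud events; the bare OUTPUT:/ERROR: lines emitted while launching QEMU are not JSON, so the decode aborts on their first byte. A minimal reproduction (not the test's actual helper):

	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		stdout := `{"specversion":"1.0","data":{"message":"Creating qemu2 VM ..."}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

		for _, line := range strings.Split(stdout, "\n") {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(strings.TrimSpace(line)), &ev); err != nil {
				// Reproduces the failure above, e.g.:
				// invalid character 'O' looking for beginning of value
				fmt.Printf("not a cloud event: %v\n", err)
				continue
			}
			fmt.Printf("event: %v\n", ev["specversion"])
		}
	}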
TestJSONOutput/pause/Command (0.08s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-440000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-440000 --output=json --user=testUser: exit status 83 (78.185708ms)
-- stdout --
	{"specversion":"1.0","id":"129243b4-99d3-41c9-8454-61229aa6e047","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-440000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"9012acde-c038-4bfe-9387-282900b0b2bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-440000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-440000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)
TestJSONOutput/unpause/Command (0.04s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-440000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-440000 --output=json --user=testUser: exit status 83 (42.8045ms)
-- stdout --
	* The control-plane node json-output-440000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-440000"
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-440000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-440000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)
TestMinikubeProfile (10.3s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-452000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-452000 --driver=qemu2 : exit status 80 (10.006461875s)
-- stdout --
	* [first-452000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-452000" primary control-plane node in "first-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-452000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-27 10:30:34.614357 -0700 PDT m=+2135.937560710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-453000 -n second-453000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-453000 -n second-453000: exit status 85 (80.275042ms)
-- stdout --
	* Profile "second-453000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-453000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-453000" host is not running, skipping log retrieval (state="* Profile \"second-453000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-453000\"")
helpers_test.go:175: Cleaning up "second-453000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-453000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-27 10:30:34.801881 -0700 PDT m=+2136.125087335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-452000 -n first-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-452000 -n first-452000: exit status 7 (30.051125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-452000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-452000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-452000
--- FAIL: TestMinikubeProfile (10.30s)
TestMountStart/serial/StartWithMountFirst (10.16s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-713000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-713000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.088347875s)
-- stdout --
	* [mount-start-1-713000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-713000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-713000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-713000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-713000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-713000 -n mount-start-1-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-713000 -n mount-start-1-713000: exit status 7 (67.061958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-713000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.16s)
TestMultiNode/serial/FreshStart2Nodes (10.04s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-874000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-874000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.969782125s)
-- stdout --
	* [multinode-874000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-874000" primary control-plane node in "multinode-874000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0927 10:30:45.277537    4325 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:30:45.277673    4325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:30:45.277677    4325 out.go:358] Setting ErrFile to fd 2...
	I0927 10:30:45.277680    4325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:30:45.277807    4325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:30:45.278890    4325 out.go:352] Setting JSON to false
	I0927 10:30:45.295053    4325 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3609,"bootTime":1727454636,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:30:45.295127    4325 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:30:45.302166    4325 out.go:177] * [multinode-874000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:30:45.310127    4325 notify.go:220] Checking for updates...
	I0927 10:30:45.315107    4325 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:30:45.325029    4325 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:30:45.333044    4325 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:30:45.337036    4325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:30:45.341070    4325 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:30:45.344091    4325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:30:45.348155    4325 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:30:45.353032    4325 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:30:45.360063    4325 start.go:297] selected driver: qemu2
	I0927 10:30:45.360069    4325 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:30:45.360075    4325 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:30:45.362509    4325 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:30:45.365252    4325 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:30:45.368114    4325 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:30:45.368145    4325 cni.go:84] Creating CNI manager for ""
	I0927 10:30:45.368166    4325 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0927 10:30:45.368171    4325 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 10:30:45.368196    4325 start.go:340] cluster config:
	{Name:multinode-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:30:45.372077    4325 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:30:45.378075    4325 out.go:177] * Starting "multinode-874000" primary control-plane node in "multinode-874000" cluster
	I0927 10:30:45.382008    4325 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:30:45.382023    4325 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:30:45.382032    4325 cache.go:56] Caching tarball of preloaded images
	I0927 10:30:45.382098    4325 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:30:45.382104    4325 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:30:45.382329    4325 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/multinode-874000/config.json ...
	I0927 10:30:45.382341    4325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/multinode-874000/config.json: {Name:mk57379fd412069deabf6c8896f9ccf00cf58e6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:30:45.382585    4325 start.go:360] acquireMachinesLock for multinode-874000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:30:45.382625    4325 start.go:364] duration metric: took 34.041µs to acquireMachinesLock for "multinode-874000"
	I0927 10:30:45.382637    4325 start.go:93] Provisioning new machine with config: &{Name:multinode-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:30:45.382696    4325 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:30:45.392040    4325 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:30:45.411035    4325 start.go:159] libmachine.API.Create for "multinode-874000" (driver="qemu2")
	I0927 10:30:45.411071    4325 client.go:168] LocalClient.Create starting
	I0927 10:30:45.411134    4325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:30:45.411164    4325 main.go:141] libmachine: Decoding PEM data...
	I0927 10:30:45.411174    4325 main.go:141] libmachine: Parsing certificate...
	I0927 10:30:45.411210    4325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:30:45.411238    4325 main.go:141] libmachine: Decoding PEM data...
	I0927 10:30:45.411247    4325 main.go:141] libmachine: Parsing certificate...
	I0927 10:30:45.411675    4325 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:30:45.572091    4325 main.go:141] libmachine: Creating SSH key...
	I0927 10:30:45.751994    4325 main.go:141] libmachine: Creating Disk image...
	I0927 10:30:45.752001    4325 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:30:45.752220    4325 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2
	I0927 10:30:45.761617    4325 main.go:141] libmachine: STDOUT: 
	I0927 10:30:45.761639    4325 main.go:141] libmachine: STDERR: 
	I0927 10:30:45.761696    4325 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2 +20000M
	I0927 10:30:45.769552    4325 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:30:45.769567    4325 main.go:141] libmachine: STDERR: 
	I0927 10:30:45.769580    4325 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2
	I0927 10:30:45.769584    4325 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:30:45.769595    4325 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:30:45.769627    4325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:15:92:8a:74:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2
	I0927 10:30:45.771228    4325 main.go:141] libmachine: STDOUT: 
	I0927 10:30:45.771242    4325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:30:45.771259    4325 client.go:171] duration metric: took 360.188958ms to LocalClient.Create
	I0927 10:30:47.773404    4325 start.go:128] duration metric: took 2.3907265s to createHost
	I0927 10:30:47.773469    4325 start.go:83] releasing machines lock for "multinode-874000", held for 2.390875s
	W0927 10:30:47.773536    4325 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:30:47.778278    4325 out.go:177] * Deleting "multinode-874000" in qemu2 ...
	W0927 10:30:47.820327    4325 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:30:47.820350    4325 start.go:729] Will try again in 5 seconds ...
	I0927 10:30:52.822558    4325 start.go:360] acquireMachinesLock for multinode-874000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:30:52.823023    4325 start.go:364] duration metric: took 362.917µs to acquireMachinesLock for "multinode-874000"
	I0927 10:30:52.823166    4325 start.go:93] Provisioning new machine with config: &{Name:multinode-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:30:52.823509    4325 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:30:52.830333    4325 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:30:52.881071    4325 start.go:159] libmachine.API.Create for "multinode-874000" (driver="qemu2")
	I0927 10:30:52.881137    4325 client.go:168] LocalClient.Create starting
	I0927 10:30:52.881273    4325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:30:52.881336    4325 main.go:141] libmachine: Decoding PEM data...
	I0927 10:30:52.881351    4325 main.go:141] libmachine: Parsing certificate...
	I0927 10:30:52.881440    4325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:30:52.881490    4325 main.go:141] libmachine: Decoding PEM data...
	I0927 10:30:52.881502    4325 main.go:141] libmachine: Parsing certificate...
	I0927 10:30:52.882018    4325 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:30:53.056280    4325 main.go:141] libmachine: Creating SSH key...
	I0927 10:30:53.143686    4325 main.go:141] libmachine: Creating Disk image...
	I0927 10:30:53.143693    4325 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:30:53.143888    4325 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2
	I0927 10:30:53.153816    4325 main.go:141] libmachine: STDOUT: 
	I0927 10:30:53.153831    4325 main.go:141] libmachine: STDERR: 
	I0927 10:30:53.153913    4325 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2 +20000M
	I0927 10:30:53.162077    4325 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:30:53.162092    4325 main.go:141] libmachine: STDERR: 
	I0927 10:30:53.162102    4325 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2
	I0927 10:30:53.162108    4325 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:30:53.162117    4325 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:30:53.162148    4325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:21:33:7a:a5:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2
	I0927 10:30:53.163736    4325 main.go:141] libmachine: STDOUT: 
	I0927 10:30:53.163749    4325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:30:53.163761    4325 client.go:171] duration metric: took 282.62275ms to LocalClient.Create
	I0927 10:30:55.165904    4325 start.go:128] duration metric: took 2.342405834s to createHost
	I0927 10:30:55.165979    4325 start.go:83] releasing machines lock for "multinode-874000", held for 2.342971083s
	W0927 10:30:55.166384    4325 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:30:55.183080    4325 out.go:201] 
	W0927 10:30:55.188069    4325 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:30:55.188123    4325 out.go:270] * 
	* 
	W0927 10:30:55.190828    4325 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:30:55.205036    4325 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-874000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
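The root cause surfaces twice in the log above: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal diagnostic sketch in Go (hypothetical, not part of the test suite; only the socket path is taken from the log) that performs the same reachability check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sockPath = "/var/run/socket_vmnet" // path from the log above

	// "connection refused" means the socket file exists but no daemon is
	// accepting on it; a missing file would fail with "no such file".
	conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

In other words, every qemu2 start in this run fails for the same environmental reason: nothing is listening on the socket_vmnet socket on the build host, so host creation never gets past the network hand-off.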
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (70.429125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.04s)
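Note that the disk-image steps before the network hand-off do succeed, as the qemu-img convert/resize invocations in the log show. A rough Go sketch of that sequence (hypothetical paths and wrapper; the commands and the +20000M resize argument mirror the log):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, similar to the
// STDOUT/STDERR lines the driver logs for each qemu-img invocation.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("executing: %s %v\n%s", name, args, out)
	return err
}

func main() {
	raw := "disk.qcow2.raw" // hypothetical; the log uses the machine dir under .minikube
	img := "disk.qcow2"
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img); err != nil {
		fmt.Println("convert failed:", err)
		return
	}
	if err := run("qemu-img", "resize", img, "+20000M"); err != nil {
		fmt.Println("resize failed:", err)
	}
}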

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (114.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (123.958084ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-874000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- rollout status deployment/busybox: exit status 1 (58.395292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.635541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0927 10:30:55.532006    2039 retry.go:31] will retry after 1.2492924s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.589666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0927 10:30:56.889218    2039 retry.go:31] will retry after 1.170723412s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.15025ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0927 10:30:58.164404    2039 retry.go:31] will retry after 2.701355639s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.681958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0927 10:31:00.973835    2039 retry.go:31] will retry after 2.240855699s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.882417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0927 10:31:03.319949    2039 retry.go:31] will retry after 5.729087085s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.052584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0927 10:31:09.158463    2039 retry.go:31] will retry after 11.019959947s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.194875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0927 10:31:20.263571    2039 retry.go:31] will retry after 9.083944241s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.307291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0927 10:31:29.442970    2039 retry.go:31] will retry after 23.131053692s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.782833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0927 10:31:52.668182    2039 retry.go:31] will retry after 20.286740356s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.449333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0927 10:32:13.058175    2039 retry.go:31] will retry after 36.429971479s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.748709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.262333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.431167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.369125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.314958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (29.922875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (114.61s)
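The long wall-clock time of this test (114.61s against a cluster that never existed) comes from the harness's retry loop: each failed pod-IP lookup is followed by a jittered, roughly increasing wait (1.2s, 1.2s, 2.7s, ... 36.4s in the retry.go lines above) until an overall deadline expires. A small Go sketch of that pattern (hypothetical names; minikube's actual retry helper may differ):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil repeats op with a jittered, roughly doubling wait between
// attempts until it succeeds or the deadline passes.
func retryUntil(deadline time.Duration, op func() error) error {
	start := time.Now()
	base := time.Second
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up: %w", err)
		}
		// jitter: sleep somewhere in [0.5, 1.5) of the current base
		sleep := time.Duration(float64(base) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		base *= 2
	}
}

func main() {
	_ = retryUntil(10*time.Second, func() error {
		return errors.New(`no server found for cluster "multinode-874000"`)
	})
}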

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-874000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.688375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (30.190458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-874000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-874000 -v 3 --alsologtostderr: exit status 83 (43.451833ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-874000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-874000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 10:32:49.967132    4432 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:32:49.967285    4432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:49.967288    4432 out.go:358] Setting ErrFile to fd 2...
	I0927 10:32:49.967290    4432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:49.967413    4432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:32:49.967656    4432 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:32:49.967864    4432 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:32:49.971650    4432 out.go:177] * The control-plane node multinode-874000 host is not running: state=Stopped
	I0927 10:32:49.976567    4432 out.go:177]   To start a cluster, run: "minikube start -p multinode-874000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-874000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (30.114667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-874000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-874000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.184875ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-874000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-874000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-874000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (29.092666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-874000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-874000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-874000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-874000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (29.788667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
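The assertion above decodes the `profile list --output json` payload and counts Config.Nodes; the profile records only the single control-plane node because the second node was never added. A stripped-down Go sketch of that check (struct shapes inferred from the logged JSON, so treat the types as illustrative rather than minikube's own):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models only the fields the node-count check needs; the
// full config in the log carries many more.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name string
			}
		}
	} `json:"valid"`
}

func main() {
	// Abbreviated form of the JSON captured in the failure above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-874000","Config":{"Nodes":[{"Name":""}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expected 3 nodes here but found 1, since the cluster
		// never started and only the control plane was recorded.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}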

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status --output json --alsologtostderr: exit status 7 (30.601459ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-874000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 10:32:50.175804    4446 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:32:50.175960    4446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:50.175965    4446 out.go:358] Setting ErrFile to fd 2...
	I0927 10:32:50.175967    4446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:50.176095    4446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:32:50.176473    4446 out.go:352] Setting JSON to true
	I0927 10:32:50.176490    4446 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:32:50.176761    4446 notify.go:220] Checking for updates...
	I0927 10:32:50.176882    4446 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:32:50.176910    4446 status.go:174] checking status of multinode-874000 ...
	I0927 10:32:50.177341    4446 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:32:50.177346    4446 status.go:377] host is not running, skipping remaining checks
	I0927 10:32:50.177348    4446 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-874000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (30.241125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
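The decode failure here is a shape mismatch: with only one stopped node, `status --output json` prints a single JSON object (see the stdout above), while the test unmarshals into a slice, hence "cannot unmarshal object into Go value of type []cluster.Status". A tolerant decoder sketch in Go (field names copied from the logged output; the type is illustrative, not minikube's internal cluster.Status):

package main

import (
	"encoding/json"
	"fmt"
)

type status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either a JSON array or the single object that a
// one-node profile emits, which is the case that tripped the test.
func decodeStatuses(raw []byte) ([]status, error) {
	var many []status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-874000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", sts)
}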

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 node stop m03: exit status 85 (47.145125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-874000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status: exit status 7 (30.257583ms)

                                                
                                                
-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status --alsologtostderr: exit status 7 (29.972ms)

                                                
                                                
-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 10:32:50.315079    4454 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:32:50.315228    4454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:50.315231    4454 out.go:358] Setting ErrFile to fd 2...
	I0927 10:32:50.315233    4454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:50.315362    4454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:32:50.315476    4454 out.go:352] Setting JSON to false
	I0927 10:32:50.315489    4454 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:32:50.315537    4454 notify.go:220] Checking for updates...
	I0927 10:32:50.315680    4454 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:32:50.315689    4454 status.go:174] checking status of multinode-874000 ...
	I0927 10:32:50.315929    4454 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:32:50.315933    4454 status.go:377] host is not running, skipping remaining checks
	I0927 10:32:50.315935    4454 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-874000 status --alsologtostderr": multinode-874000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (30.483542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (45.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.653084ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 10:32:50.375546    4458 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:32:50.375787    4458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:50.375790    4458 out.go:358] Setting ErrFile to fd 2...
	I0927 10:32:50.375793    4458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:50.375936    4458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:32:50.376159    4458 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:32:50.376342    4458 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:32:50.380467    4458 out.go:201] 
	W0927 10:32:50.383507    4458 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0927 10:32:50.383512    4458 out.go:270] * 
	* 
	W0927 10:32:50.385151    4458 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:32:50.388461    4458 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0927 10:32:50.375546    4458 out.go:345] Setting OutFile to fd 1 ...
I0927 10:32:50.375787    4458 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:32:50.375790    4458 out.go:358] Setting ErrFile to fd 2...
I0927 10:32:50.375793    4458 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:32:50.375936    4458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
I0927 10:32:50.376159    4458 mustload.go:65] Loading cluster: multinode-874000
I0927 10:32:50.376342    4458 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:32:50.380467    4458 out.go:201] 
W0927 10:32:50.383507    4458 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0927 10:32:50.383512    4458 out.go:270] * 
* 
W0927 10:32:50.385151    4458 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0927 10:32:50.388461    4458 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-874000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr: exit status 7 (30.042458ms)

                                                
                                                
-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 10:32:50.420847    4460 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:32:50.420989    4460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:50.420993    4460 out.go:358] Setting ErrFile to fd 2...
	I0927 10:32:50.420995    4460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:50.421150    4460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:32:50.421272    4460 out.go:352] Setting JSON to false
	I0927 10:32:50.421286    4460 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:32:50.421341    4460 notify.go:220] Checking for updates...
	I0927 10:32:50.421500    4460 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:32:50.421509    4460 status.go:174] checking status of multinode-874000 ...
	I0927 10:32:50.421742    4460 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:32:50.421745    4460 status.go:377] host is not running, skipping remaining checks
	I0927 10:32:50.421747    4460 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0927 10:32:50.422605    2039 retry.go:31] will retry after 1.405721566s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr: exit status 7 (73.547125ms)

                                                
                                                
-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 10:32:51.901990    4462 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:32:51.902227    4462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:51.902231    4462 out.go:358] Setting ErrFile to fd 2...
	I0927 10:32:51.902234    4462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:51.902390    4462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:32:51.902555    4462 out.go:352] Setting JSON to false
	I0927 10:32:51.902569    4462 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:32:51.902613    4462 notify.go:220] Checking for updates...
	I0927 10:32:51.902832    4462 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:32:51.902844    4462 status.go:174] checking status of multinode-874000 ...
	I0927 10:32:51.903159    4462 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:32:51.903164    4462 status.go:377] host is not running, skipping remaining checks
	I0927 10:32:51.903167    4462 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0927 10:32:51.904282    2039 retry.go:31] will retry after 2.156635997s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr: exit status 7 (72.684417ms)

                                                
                                                
-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 10:32:54.133717    4464 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:32:54.133897    4464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:54.133901    4464 out.go:358] Setting ErrFile to fd 2...
	I0927 10:32:54.133904    4464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:54.134058    4464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:32:54.134222    4464 out.go:352] Setting JSON to false
	I0927 10:32:54.134237    4464 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:32:54.134266    4464 notify.go:220] Checking for updates...
	I0927 10:32:54.134495    4464 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:32:54.134508    4464 status.go:174] checking status of multinode-874000 ...
	I0927 10:32:54.134812    4464 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:32:54.134817    4464 status.go:377] host is not running, skipping remaining checks
	I0927 10:32:54.134820    4464 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0927 10:32:54.135995    2039 retry.go:31] will retry after 2.805659938s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr: exit status 7 (75.645083ms)

-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0927 10:32:57.017496    4466 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:32:57.017691    4466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:57.017695    4466 out.go:358] Setting ErrFile to fd 2...
	I0927 10:32:57.017698    4466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:57.017881    4466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:32:57.018025    4466 out.go:352] Setting JSON to false
	I0927 10:32:57.018038    4466 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:32:57.018086    4466 notify.go:220] Checking for updates...
	I0927 10:32:57.018296    4466 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:32:57.018310    4466 status.go:174] checking status of multinode-874000 ...
	I0927 10:32:57.018621    4466 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:32:57.018626    4466 status.go:377] host is not running, skipping remaining checks
	I0927 10:32:57.018628    4466 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0927 10:32:57.019710    2039 retry.go:31] will retry after 2.77723494s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr: exit status 7 (72.706916ms)

-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0927 10:32:59.869801    4468 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:32:59.870001    4468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:59.870005    4468 out.go:358] Setting ErrFile to fd 2...
	I0927 10:32:59.870008    4468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:32:59.870190    4468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:32:59.870353    4468 out.go:352] Setting JSON to false
	I0927 10:32:59.870367    4468 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:32:59.870410    4468 notify.go:220] Checking for updates...
	I0927 10:32:59.870625    4468 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:32:59.870637    4468 status.go:174] checking status of multinode-874000 ...
	I0927 10:32:59.870951    4468 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:32:59.870956    4468 status.go:377] host is not running, skipping remaining checks
	I0927 10:32:59.870959    4468 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0927 10:32:59.871997    2039 retry.go:31] will retry after 4.814816194s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr: exit status 7 (73.891541ms)

-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0927 10:33:04.760008    4472 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:33:04.760236    4472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:04.760241    4472 out.go:358] Setting ErrFile to fd 2...
	I0927 10:33:04.760245    4472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:04.760445    4472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:33:04.760670    4472 out.go:352] Setting JSON to false
	I0927 10:33:04.760693    4472 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:33:04.760740    4472 notify.go:220] Checking for updates...
	I0927 10:33:04.761025    4472 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:33:04.761039    4472 status.go:174] checking status of multinode-874000 ...
	I0927 10:33:04.761378    4472 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:33:04.761383    4472 status.go:377] host is not running, skipping remaining checks
	I0927 10:33:04.761386    4472 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0927 10:33:04.762463    2039 retry.go:31] will retry after 9.639722668s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr: exit status 7 (72.4275ms)

-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0927 10:33:14.474150    4474 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:33:14.474345    4474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:14.474352    4474 out.go:358] Setting ErrFile to fd 2...
	I0927 10:33:14.474355    4474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:14.474541    4474 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:33:14.474727    4474 out.go:352] Setting JSON to false
	I0927 10:33:14.474746    4474 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:33:14.474774    4474 notify.go:220] Checking for updates...
	I0927 10:33:14.475030    4474 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:33:14.475044    4474 status.go:174] checking status of multinode-874000 ...
	I0927 10:33:14.475359    4474 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:33:14.475364    4474 status.go:377] host is not running, skipping remaining checks
	I0927 10:33:14.475367    4474 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0927 10:33:14.476448    2039 retry.go:31] will retry after 9.119776926s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr: exit status 7 (71.262875ms)

-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0927 10:33:23.666366    4478 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:33:23.666566    4478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:23.666571    4478 out.go:358] Setting ErrFile to fd 2...
	I0927 10:33:23.666574    4478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:23.666730    4478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:33:23.666894    4478 out.go:352] Setting JSON to false
	I0927 10:33:23.666909    4478 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:33:23.666939    4478 notify.go:220] Checking for updates...
	I0927 10:33:23.667175    4478 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:33:23.667188    4478 status.go:174] checking status of multinode-874000 ...
	I0927 10:33:23.667493    4478 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:33:23.667498    4478 status.go:377] host is not running, skipping remaining checks
	I0927 10:33:23.667501    4478 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0927 10:33:23.668565    2039 retry.go:31] will retry after 12.244204788s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr: exit status 7 (73.286167ms)

-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0927 10:33:35.985869    4480 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:33:35.986066    4480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:35.986071    4480 out.go:358] Setting ErrFile to fd 2...
	I0927 10:33:35.986074    4480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:35.986249    4480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:33:35.986406    4480 out.go:352] Setting JSON to false
	I0927 10:33:35.986420    4480 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:33:35.986463    4480 notify.go:220] Checking for updates...
	I0927 10:33:35.986681    4480 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:33:35.986694    4480 status.go:174] checking status of multinode-874000 ...
	I0927 10:33:35.987005    4480 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:33:35.987010    4480 status.go:377] host is not running, skipping remaining checks
	I0927 10:33:35.987012    4480 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-874000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (32.6225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.67s)
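The eight "retry.go:31] will retry after ..." lines above show the test helper polling `minikube status` with growing, jittered delays (1.4s, 2.2s, 2.8s, 2.8s, 4.8s, 9.6s, 9.1s, 12.2s) before giving up. Below is a minimal Go sketch of that retry-with-backoff shape. It is an illustration only, not minikube's actual retry helper; the function name retryWithBackoff and the doubling/jitter parameters are assumptions.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out,
    // sleeping a jittered, roughly doubling delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Jitter so parallel tests do not retry in lockstep; the logged
    		// intervals above (1.4s up to 12.2s) grow in roughly this way.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(4, time.Second, func() error {
    		return errors.New("exit status 7") // stand-in for the failing status check
    	})
    	fmt.Println("gave up:", err)
    }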

TestMultiNode/serial/RestartKeepsNodes (8.54s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-874000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-874000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-874000: (3.17538625s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-874000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-874000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.227215875s)

-- stdout --
	* [multinode-874000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-874000" primary control-plane node in "multinode-874000" cluster
	* Restarting existing qemu2 VM for "multinode-874000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-874000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:33:39.287323    4504 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:33:39.287461    4504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:39.287465    4504 out.go:358] Setting ErrFile to fd 2...
	I0927 10:33:39.287468    4504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:39.287620    4504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:33:39.288750    4504 out.go:352] Setting JSON to false
	I0927 10:33:39.307650    4504 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3783,"bootTime":1727454636,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:33:39.307728    4504 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:33:39.311791    4504 out.go:177] * [multinode-874000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:33:39.318716    4504 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:33:39.318757    4504 notify.go:220] Checking for updates...
	I0927 10:33:39.325752    4504 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:33:39.332640    4504 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:33:39.336689    4504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:33:39.339592    4504 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:33:39.346650    4504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:33:39.348232    4504 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:33:39.348283    4504 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:33:39.354725    4504 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:33:39.361499    4504 start.go:297] selected driver: qemu2
	I0927 10:33:39.361506    4504 start.go:901] validating driver "qemu2" against &{Name:multinode-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:33:39.361558    4504 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:33:39.363979    4504 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:33:39.364005    4504 cni.go:84] Creating CNI manager for ""
	I0927 10:33:39.364035    4504 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 10:33:39.364085    4504 start.go:340] cluster config:
	{Name:multinode-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:33:39.367900    4504 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:33:39.374721    4504 out.go:177] * Starting "multinode-874000" primary control-plane node in "multinode-874000" cluster
	I0927 10:33:39.378602    4504 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:33:39.378618    4504 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:33:39.378626    4504 cache.go:56] Caching tarball of preloaded images
	I0927 10:33:39.378694    4504 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:33:39.378700    4504 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:33:39.378755    4504 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/multinode-874000/config.json ...
	I0927 10:33:39.379221    4504 start.go:360] acquireMachinesLock for multinode-874000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:33:39.379261    4504 start.go:364] duration metric: took 31.959µs to acquireMachinesLock for "multinode-874000"
	I0927 10:33:39.379270    4504 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:33:39.379274    4504 fix.go:54] fixHost starting: 
	I0927 10:33:39.379402    4504 fix.go:112] recreateIfNeeded on multinode-874000: state=Stopped err=<nil>
	W0927 10:33:39.379410    4504 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:33:39.387682    4504 out.go:177] * Restarting existing qemu2 VM for "multinode-874000" ...
	I0927 10:33:39.391665    4504 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:33:39.391704    4504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:21:33:7a:a5:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2
	I0927 10:33:39.393860    4504 main.go:141] libmachine: STDOUT: 
	I0927 10:33:39.393884    4504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:33:39.393916    4504 fix.go:56] duration metric: took 14.639875ms for fixHost
	I0927 10:33:39.393921    4504 start.go:83] releasing machines lock for "multinode-874000", held for 14.65625ms
	W0927 10:33:39.393930    4504 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:33:39.393969    4504 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:33:39.393974    4504 start.go:729] Will try again in 5 seconds ...
	I0927 10:33:44.396030    4504 start.go:360] acquireMachinesLock for multinode-874000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:33:44.396468    4504 start.go:364] duration metric: took 346.583µs to acquireMachinesLock for "multinode-874000"
	I0927 10:33:44.396606    4504 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:33:44.396624    4504 fix.go:54] fixHost starting: 
	I0927 10:33:44.397297    4504 fix.go:112] recreateIfNeeded on multinode-874000: state=Stopped err=<nil>
	W0927 10:33:44.397322    4504 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:33:44.406613    4504 out.go:177] * Restarting existing qemu2 VM for "multinode-874000" ...
	I0927 10:33:44.410745    4504 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:33:44.410989    4504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:21:33:7a:a5:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2
	I0927 10:33:44.419842    4504 main.go:141] libmachine: STDOUT: 
	I0927 10:33:44.419903    4504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:33:44.419974    4504 fix.go:56] duration metric: took 23.352166ms for fixHost
	I0927 10:33:44.419994    4504 start.go:83] releasing machines lock for "multinode-874000", held for 23.503834ms
	W0927 10:33:44.420206    4504 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-874000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-874000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:33:44.427638    4504 out.go:201] 
	W0927 10:33:44.431809    4504 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:33:44.431956    4504 out.go:270] * 
	* 
	W0927 10:33:44.434955    4504 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:33:44.441719    4504 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-874000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-874000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (33.102542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.54s)
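Every restart in this block dies at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never gets its network fd. A short preflight like the sketch below (hypothetical, not part of minikube) reproduces the same "connection refused" whenever the socket_vmnet daemon is not listening on that path.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Path taken from the ERROR lines above.
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// With the daemon down this prints something like:
    		//   dial unix /var/run/socket_vmnet: connect: connection refused
    		fmt.Println("preflight failed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }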

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 node delete m03: exit status 83 (39.287417ms)

-- stdout --
	* The control-plane node multinode-874000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-874000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-874000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status --alsologtostderr: exit status 7 (30.550959ms)

-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0927 10:33:44.627859    4521 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:33:44.628001    4521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:44.628004    4521 out.go:358] Setting ErrFile to fd 2...
	I0927 10:33:44.628006    4521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:44.628151    4521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:33:44.628277    4521 out.go:352] Setting JSON to false
	I0927 10:33:44.628288    4521 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:33:44.628359    4521 notify.go:220] Checking for updates...
	I0927 10:33:44.628495    4521 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:33:44.628503    4521 status.go:174] checking status of multinode-874000 ...
	I0927 10:33:44.628737    4521 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:33:44.628741    4521 status.go:377] host is not running, skipping remaining checks
	I0927 10:33:44.628743    4521 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-874000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (30.415792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
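The harness tells these failures apart purely by exit code: 83 above means the control-plane host is not running, while 7 from `status` means the host is stopped. Capturing that code from Go looks roughly like the sketch below; it is a generic pattern, not the test harness itself, and the binary path is simply the one used throughout this report.

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-874000", "status")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// 7 here would mean "host stopped", matching the runs above.
    		fmt.Println("exit status:", exitErr.ExitCode())
    	}
    }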

TestMultiNode/serial/StopMultiNode (2.12s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-874000 stop: (1.99478475s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status: exit status 7 (64.542459ms)

-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-874000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-874000 status --alsologtostderr: exit status 7 (33.192291ms)

-- stdout --
	multinode-874000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0927 10:33:46.751500    4541 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:33:46.751662    4541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:46.751665    4541 out.go:358] Setting ErrFile to fd 2...
	I0927 10:33:46.751667    4541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:46.751798    4541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:33:46.751927    4541 out.go:352] Setting JSON to false
	I0927 10:33:46.751941    4541 mustload.go:65] Loading cluster: multinode-874000
	I0927 10:33:46.752003    4541 notify.go:220] Checking for updates...
	I0927 10:33:46.752158    4541 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:33:46.752171    4541 status.go:174] checking status of multinode-874000 ...
	I0927 10:33:46.752407    4541 status.go:364] multinode-874000 host status = "Stopped" (err=<nil>)
	I0927 10:33:46.752411    4541 status.go:377] host is not running, skipping remaining checks
	I0927 10:33:46.752413    4541 status.go:176] multinode-874000 status: &{Name:multinode-874000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-874000 status --alsologtostderr": multinode-874000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-874000 status --alsologtostderr": multinode-874000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (29.373667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.12s)
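The "incorrect number of stopped hosts/kubelets" assertions count occurrences in the status output: a two-node cluster should report "host: Stopped" twice after `minikube stop`, but only the control plane is listed above because the second node was never created. Below is a sketch of that counting check, with the expected node count of 2 assumed from the earlier FreshStart2Nodes step.

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Trimmed stand-in for the status output captured above.
    	status := "multinode-874000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
    	const wantNodes = 2
    	if got := strings.Count(status, "host: Stopped"); got != wantNodes {
    		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
    	}
    	if got := strings.Count(status, "kubelet: Stopped"); got != wantNodes {
    		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
    	}
    }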

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-874000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-874000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.184568709s)

-- stdout --
	* [multinode-874000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-874000" primary control-plane node in "multinode-874000" cluster
	* Restarting existing qemu2 VM for "multinode-874000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-874000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:33:46.810650    4545 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:33:46.810776    4545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:46.810779    4545 out.go:358] Setting ErrFile to fd 2...
	I0927 10:33:46.810782    4545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:33:46.810930    4545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:33:46.811923    4545 out.go:352] Setting JSON to false
	I0927 10:33:46.827834    4545 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3790,"bootTime":1727454636,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:33:46.827898    4545 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:33:46.832761    4545 out.go:177] * [multinode-874000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:33:46.839691    4545 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:33:46.839793    4545 notify.go:220] Checking for updates...
	I0927 10:33:46.847571    4545 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:33:46.851674    4545 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:33:46.854739    4545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:33:46.857713    4545 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:33:46.860689    4545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:33:46.864065    4545 config.go:182] Loaded profile config "multinode-874000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:33:46.864338    4545 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:33:46.868702    4545 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:33:46.875745    4545 start.go:297] selected driver: qemu2
	I0927 10:33:46.875753    4545 start.go:901] validating driver "qemu2" against &{Name:multinode-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:33:46.875821    4545 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:33:46.878124    4545 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:33:46.878148    4545 cni.go:84] Creating CNI manager for ""
	I0927 10:33:46.878173    4545 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 10:33:46.878218    4545 start.go:340] cluster config:
	{Name:multinode-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:33:46.881757    4545 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:33:46.889709    4545 out.go:177] * Starting "multinode-874000" primary control-plane node in "multinode-874000" cluster
	I0927 10:33:46.893550    4545 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:33:46.893577    4545 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:33:46.893591    4545 cache.go:56] Caching tarball of preloaded images
	I0927 10:33:46.893636    4545 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:33:46.893641    4545 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:33:46.893700    4545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/multinode-874000/config.json ...
	I0927 10:33:46.894146    4545 start.go:360] acquireMachinesLock for multinode-874000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:33:46.894177    4545 start.go:364] duration metric: took 24.791µs to acquireMachinesLock for "multinode-874000"
	I0927 10:33:46.894186    4545 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:33:46.894192    4545 fix.go:54] fixHost starting: 
	I0927 10:33:46.894307    4545 fix.go:112] recreateIfNeeded on multinode-874000: state=Stopped err=<nil>
	W0927 10:33:46.894316    4545 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:33:46.897747    4545 out.go:177] * Restarting existing qemu2 VM for "multinode-874000" ...
	I0927 10:33:46.905683    4545 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:33:46.905725    4545 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:21:33:7a:a5:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2
	I0927 10:33:46.907733    4545 main.go:141] libmachine: STDOUT: 
	I0927 10:33:46.907752    4545 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:33:46.907782    4545 fix.go:56] duration metric: took 13.590375ms for fixHost
	I0927 10:33:46.907787    4545 start.go:83] releasing machines lock for "multinode-874000", held for 13.60625ms
	W0927 10:33:46.907794    4545 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:33:46.907828    4545 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:33:46.907832    4545 start.go:729] Will try again in 5 seconds ...
	I0927 10:33:51.909939    4545 start.go:360] acquireMachinesLock for multinode-874000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:33:51.910320    4545 start.go:364] duration metric: took 296.917µs to acquireMachinesLock for "multinode-874000"
	I0927 10:33:51.910450    4545 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:33:51.910472    4545 fix.go:54] fixHost starting: 
	I0927 10:33:51.911229    4545 fix.go:112] recreateIfNeeded on multinode-874000: state=Stopped err=<nil>
	W0927 10:33:51.911257    4545 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:33:51.915665    4545 out.go:177] * Restarting existing qemu2 VM for "multinode-874000" ...
	I0927 10:33:51.923639    4545 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:33:51.923897    4545 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:21:33:7a:a5:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/multinode-874000/disk.qcow2
	I0927 10:33:51.932995    4545 main.go:141] libmachine: STDOUT: 
	I0927 10:33:51.933047    4545 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:33:51.933116    4545 fix.go:56] duration metric: took 22.643542ms for fixHost
	I0927 10:33:51.933136    4545 start.go:83] releasing machines lock for "multinode-874000", held for 22.797209ms
	W0927 10:33:51.933281    4545 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-874000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-874000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:33:51.940530    4545 out.go:201] 
	W0927 10:33:51.944834    4545 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:33:51.944856    4545 out.go:270] * 
	* 
	W0927 10:33:51.947577    4545 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:33:51.954712    4545 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-874000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (68.078083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
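
Every failure above traces to the same driver error: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives the network file descriptor it is launched with (-netdev socket,id=net0,fd=3). The check is easy to reproduce outside the suite; the sketch below is a hypothetical helper, not part of the test code, assuming only the socket path that appears in these logs:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the logs above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here means no daemon is accepting on the
			// socket, which is exactly what the driver reports in this run.
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("connected to %s; a listener is present\n", sock)
	}

If this dial fails the same way between test runs, the problem is the socket_vmnet service on the build agent rather than the code under test.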

TestMultiNode/serial/ValidateNameConflict (20.46s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-874000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-874000-m01 --driver=qemu2 
E0927 10:33:53.617757    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-874000-m01 --driver=qemu2 : exit status 80 (10.259490667s)

-- stdout --
	* [multinode-874000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-874000-m01" primary control-plane node in "multinode-874000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-874000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-874000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-874000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-874000-m02 --driver=qemu2 : exit status 80 (9.969381458s)

-- stdout --
	* [multinode-874000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-874000-m02" primary control-plane node in "multinode-874000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-874000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-874000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-874000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-874000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-874000: exit status 83 (80.265458ms)

-- stdout --
	* The control-plane node multinode-874000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-874000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-874000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000
E0927 10:34:12.491941    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-874000 -n multinode-874000: exit status 7 (31.293375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-874000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.46s)
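
The name-conflict logic itself is never exercised here: host creation for both the -m01 and -m02 profiles dies on the same refused connection. When triaging, it helps to distinguish a missing socket file from a stale one; a minimal stat-based sketch (hypothetical, under the same path assumption as above):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		fi, err := os.Stat(sock)
		switch {
		case err != nil:
			// No socket file at all: the daemon was never started.
			fmt.Printf("%s missing: %v\n", sock, err)
		case fi.Mode()&os.ModeSocket == 0:
			// Something else occupies the path.
			fmt.Printf("%s exists but is not a socket (mode %v)\n", sock, fi.Mode())
		default:
			// File exists and is a socket; "connection refused" then points
			// to a daemon that exited and left its socket behind.
			fmt.Printf("%s is a unix socket; check whether socket_vmnet is alive\n", sock)
		}
	}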

TestPreload (10.04s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-845000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-845000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.893620292s)

-- stdout --
	* [test-preload-845000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-845000" primary control-plane node in "test-preload-845000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-845000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:34:12.635204    4599 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:34:12.635313    4599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:34:12.635315    4599 out.go:358] Setting ErrFile to fd 2...
	I0927 10:34:12.635318    4599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:34:12.635435    4599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:34:12.636491    4599 out.go:352] Setting JSON to false
	I0927 10:34:12.652651    4599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3816,"bootTime":1727454636,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:34:12.652724    4599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:34:12.658852    4599 out.go:177] * [test-preload-845000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:34:12.666843    4599 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:34:12.666901    4599 notify.go:220] Checking for updates...
	I0927 10:34:12.672327    4599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:34:12.675783    4599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:34:12.678835    4599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:34:12.681810    4599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:34:12.684764    4599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:34:12.688213    4599 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:34:12.688265    4599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:34:12.692805    4599 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:34:12.699784    4599 start.go:297] selected driver: qemu2
	I0927 10:34:12.699790    4599 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:34:12.699796    4599 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:34:12.702156    4599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:34:12.704804    4599 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:34:12.707936    4599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:34:12.707961    4599 cni.go:84] Creating CNI manager for ""
	I0927 10:34:12.707988    4599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:34:12.707993    4599 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:34:12.708028    4599 start.go:340] cluster config:
	{Name:test-preload-845000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:34:12.711766    4599 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:12.719608    4599 out.go:177] * Starting "test-preload-845000" primary control-plane node in "test-preload-845000" cluster
	I0927 10:34:12.723814    4599 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0927 10:34:12.723894    4599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/test-preload-845000/config.json ...
	I0927 10:34:12.723911    4599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/test-preload-845000/config.json: {Name:mk5aabfd12b027ebd99a31115c6b9c42f2c1bcfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:34:12.723909    4599 cache.go:107] acquiring lock: {Name:mkf48093fa971191f71c46f781d51b0e356458e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:12.723932    4599 cache.go:107] acquiring lock: {Name:mk40839380edd0a5044657994ade7f9457d4e70b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:12.723929    4599 cache.go:107] acquiring lock: {Name:mk016710de6799848165d9d511178ba216f995c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:12.723954    4599 cache.go:107] acquiring lock: {Name:mk4bfaf5c319561ed11d43ddcb0dbd6d69428a79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:12.723911    4599 cache.go:107] acquiring lock: {Name:mk0c0e6039af75147e81467e0007da2e8e1752a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:12.724120    4599 cache.go:107] acquiring lock: {Name:mkc1009f687a53adcc563343a5e96230ecba98d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:12.724170    4599 cache.go:107] acquiring lock: {Name:mk099dbbb2ffeef3a95100fc16e19dba93b2a5d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:12.724155    4599 cache.go:107] acquiring lock: {Name:mkcc0501d5322e50da7bd9d7d9f41d45016ff81f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:34:12.724201    4599 start.go:360] acquireMachinesLock for test-preload-845000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:34:12.724314    4599 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0927 10:34:12.724428    4599 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0927 10:34:12.724319    4599 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0927 10:34:12.724474    4599 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:34:12.724487    4599 start.go:364] duration metric: took 183.334µs to acquireMachinesLock for "test-preload-845000"
	I0927 10:34:12.724548    4599 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0927 10:34:12.724572    4599 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0927 10:34:12.724592    4599 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:34:12.724636    4599 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:34:12.724534    4599 start.go:93] Provisioning new machine with config: &{Name:test-preload-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:34:12.724689    4599 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:34:12.732793    4599 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:34:12.737573    4599 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0927 10:34:12.737948    4599 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:34:12.738206    4599 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0927 10:34:12.740205    4599 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0927 10:34:12.740244    4599 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:34:12.740266    4599 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:34:12.740287    4599 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0927 10:34:12.740311    4599 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0927 10:34:12.751334    4599 start.go:159] libmachine.API.Create for "test-preload-845000" (driver="qemu2")
	I0927 10:34:12.751362    4599 client.go:168] LocalClient.Create starting
	I0927 10:34:12.751459    4599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:34:12.751491    4599 main.go:141] libmachine: Decoding PEM data...
	I0927 10:34:12.751502    4599 main.go:141] libmachine: Parsing certificate...
	I0927 10:34:12.751540    4599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:34:12.751564    4599 main.go:141] libmachine: Decoding PEM data...
	I0927 10:34:12.751573    4599 main.go:141] libmachine: Parsing certificate...
	I0927 10:34:12.751936    4599 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:34:12.912775    4599 main.go:141] libmachine: Creating SSH key...
	I0927 10:34:12.967558    4599 main.go:141] libmachine: Creating Disk image...
	I0927 10:34:12.967587    4599 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:34:12.967808    4599 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2
	I0927 10:34:12.977814    4599 main.go:141] libmachine: STDOUT: 
	I0927 10:34:12.977835    4599 main.go:141] libmachine: STDERR: 
	I0927 10:34:12.977908    4599 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2 +20000M
	I0927 10:34:12.986618    4599 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:34:12.986643    4599 main.go:141] libmachine: STDERR: 
	I0927 10:34:12.986669    4599 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2
	I0927 10:34:12.986676    4599 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:34:12.986691    4599 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:34:12.986719    4599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:95:48:f5:b8:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2
	I0927 10:34:12.989139    4599 main.go:141] libmachine: STDOUT: 
	I0927 10:34:12.989161    4599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:34:12.989183    4599 client.go:171] duration metric: took 237.820625ms to LocalClient.Create
	I0927 10:34:13.193285    4599 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0927 10:34:13.207432    4599 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0927 10:34:13.207458    4599 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0927 10:34:13.229372    4599 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0927 10:34:13.232103    4599 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0927 10:34:13.297923    4599 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0927 10:34:13.335422    4599 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0927 10:34:13.344773    4599 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0927 10:34:13.352746    4599 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0927 10:34:13.352775    4599 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 628.866167ms
	I0927 10:34:13.352799    4599 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0927 10:34:13.993690    4599 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0927 10:34:13.993802    4599 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0927 10:34:14.487300    4599 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0927 10:34:14.487352    4599 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.76348425s
	I0927 10:34:14.487377    4599 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0927 10:34:14.989447    4599 start.go:128] duration metric: took 2.264788958s to createHost
	I0927 10:34:14.989508    4599 start.go:83] releasing machines lock for "test-preload-845000", held for 2.265069042s
	W0927 10:34:14.989580    4599 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:34:15.007971    4599 out.go:177] * Deleting "test-preload-845000" in qemu2 ...
	W0927 10:34:15.045235    4599 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:34:15.045263    4599 start.go:729] Will try again in 5 seconds ...
	I0927 10:34:15.532436    4599 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0927 10:34:15.532506    4599 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.808469833s
	I0927 10:34:15.532538    4599 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0927 10:34:15.928028    4599 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0927 10:34:15.928102    4599 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.204225375s
	I0927 10:34:15.928151    4599 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0927 10:34:17.236814    4599 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0927 10:34:17.236867    4599 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.513075084s
	I0927 10:34:17.236895    4599 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0927 10:34:17.277994    4599 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0927 10:34:17.278041    4599 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.554223667s
	I0927 10:34:17.278064    4599 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0927 10:34:17.798008    4599 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0927 10:34:17.798054    4599 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.074059167s
	I0927 10:34:17.798084    4599 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0927 10:34:20.045437    4599 start.go:360] acquireMachinesLock for test-preload-845000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:34:20.045898    4599 start.go:364] duration metric: took 373.25µs to acquireMachinesLock for "test-preload-845000"
	I0927 10:34:20.046019    4599 start.go:93] Provisioning new machine with config: &{Name:test-preload-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:34:20.046220    4599 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:34:20.052822    4599 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:34:20.104796    4599 start.go:159] libmachine.API.Create for "test-preload-845000" (driver="qemu2")
	I0927 10:34:20.104833    4599 client.go:168] LocalClient.Create starting
	I0927 10:34:20.104946    4599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:34:20.105017    4599 main.go:141] libmachine: Decoding PEM data...
	I0927 10:34:20.105059    4599 main.go:141] libmachine: Parsing certificate...
	I0927 10:34:20.105126    4599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:34:20.105170    4599 main.go:141] libmachine: Decoding PEM data...
	I0927 10:34:20.105183    4599 main.go:141] libmachine: Parsing certificate...
	I0927 10:34:20.105694    4599 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:34:20.275029    4599 main.go:141] libmachine: Creating SSH key...
	I0927 10:34:20.426470    4599 main.go:141] libmachine: Creating Disk image...
	I0927 10:34:20.426476    4599 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:34:20.426674    4599 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2
	I0927 10:34:20.436242    4599 main.go:141] libmachine: STDOUT: 
	I0927 10:34:20.436270    4599 main.go:141] libmachine: STDERR: 
	I0927 10:34:20.436346    4599 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2 +20000M
	I0927 10:34:20.444528    4599 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:34:20.444543    4599 main.go:141] libmachine: STDERR: 
	I0927 10:34:20.444561    4599 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2
	I0927 10:34:20.444567    4599 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:34:20.444575    4599 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:34:20.444619    4599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:09:63:6f:d4:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/test-preload-845000/disk.qcow2
	I0927 10:34:20.446330    4599 main.go:141] libmachine: STDOUT: 
	I0927 10:34:20.446346    4599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:34:20.446361    4599 client.go:171] duration metric: took 341.531875ms to LocalClient.Create
	I0927 10:34:22.132935    4599 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0927 10:34:22.133001    4599 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.409065916s
	I0927 10:34:22.133028    4599 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0927 10:34:22.133066    4599 cache.go:87] Successfully saved all images to host disk.
	I0927 10:34:22.448490    4599 start.go:128] duration metric: took 2.402302333s to createHost
	I0927 10:34:22.448537    4599 start.go:83] releasing machines lock for "test-preload-845000", held for 2.402679333s
	W0927 10:34:22.448798    4599 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-845000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-845000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:34:22.465517    4599 out.go:201] 
	W0927 10:34:22.471932    4599 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:34:22.471958    4599 out.go:270] * 
	* 
	W0927 10:34:22.474659    4599 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:34:22.486357    4599 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-845000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-27 10:34:22.503959 -0700 PDT m=+2363.878288918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-845000 -n test-preload-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-845000 -n test-preload-845000: exit status 7 (66.780625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-845000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-845000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-845000
--- FAIL: TestPreload (10.04s)
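
Although the VM never boots, the cache.go lines above show the preload half of the test completing: all eight v1.24.4 images are fetched (with arm64 arch fix-ups for coredns and storage-provisioner) and saved under .minikube/cache/images/arm64. A small hypothetical sketch to confirm those tarballs landed on disk, assuming MINIKUBE_HOME points at the .minikube directory as it does in this run:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home := os.Getenv("MINIKUBE_HOME") // e.g. .../19712-1508/.minikube in these logs
		images := []string{
			"registry.k8s.io/pause_3.7",
			"registry.k8s.io/coredns/coredns_v1.8.6",
			"registry.k8s.io/etcd_3.5.3-0",
			"registry.k8s.io/kube-apiserver_v1.24.4",
			"registry.k8s.io/kube-controller-manager_v1.24.4",
			"registry.k8s.io/kube-proxy_v1.24.4",
			"registry.k8s.io/kube-scheduler_v1.24.4",
			"gcr.io/k8s-minikube/storage-provisioner_v5",
		}
		for _, img := range images {
			p := filepath.Join(home, "cache", "images", "arm64", img)
			if fi, err := os.Stat(p); err == nil {
				fmt.Printf("ok      %s (%d bytes)\n", p, fi.Size())
			} else {
				fmt.Printf("missing %s\n", p)
			}
		}
	}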

TestScheduledStopUnix (10.08s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-823000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-823000 --memory=2048 --driver=qemu2 : exit status 80 (9.925119375s)

-- stdout --
	* [scheduled-stop-823000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-823000" primary control-plane node in "scheduled-stop-823000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-823000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-823000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-823000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-823000" primary control-plane node in "scheduled-stop-823000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-823000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-823000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-27 10:34:32.576353 -0700 PDT m=+2373.950945626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-823000 -n scheduled-stop-823000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-823000 -n scheduled-stop-823000: exit status 7 (68.703167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-823000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-823000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-823000
--- FAIL: TestScheduledStopUnix (10.08s)
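
The start logs also show minikube's own recovery path: on the first refused connection it deletes the half-created machine, waits, and retries once before exiting with GUEST_PROVISION. A stripped-down sketch of that fixed-delay retry, with the error string and 5-second pause taken from the logs (the startHost stub is hypothetical, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails throughout this report.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err = startHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}

Since the daemon stays down for the whole run, the retry fails identically and every test in this group exits with status 80.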

TestSkaffold (13.32s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2157734367 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2157734367 version: (1.102455083s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-132000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-132000 --memory=2600 --driver=qemu2 : exit status 80 (10.029880958s)

-- stdout --
	* [skaffold-132000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-132000" primary control-plane node in "skaffold-132000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-132000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-132000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-132000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-132000" primary control-plane node in "skaffold-132000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-132000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-132000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-27 10:34:45.90544 -0700 PDT m=+2387.280379835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-132000 -n skaffold-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-132000 -n skaffold-132000: exit status 7 (61.960959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-132000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-132000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-132000
--- FAIL: TestSkaffold (13.32s)
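Both provisioning attempts above failed while dialing /var/run/socket_vmnet, which points at the host-side socket_vmnet daemon rather than at skaffold or minikube itself. A minimal triage sketch, assuming socket_vmnet was installed through Homebrew as in minikube's qemu2 driver setup (service name and socket path may differ for other installs):

	# sketch only -- not part of the test run
	ls -l /var/run/socket_vmnet                  # does the socket minikube dials exist?
	sudo launchctl list | grep -i socket_vmnet   # is the launchd daemon loaded?
	HOMEBREW=$(which brew) && sudo "$HOMEBREW" services restart socket_vmnet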

TestRunningBinaryUpgrade (600.57s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2882539891 start -p running-upgrade-198000 --memory=2200 --vm-driver=qemu2 
E0927 10:35:35.575843    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2882539891 start -p running-upgrade-198000 --memory=2200 --vm-driver=qemu2 : (54.642066167s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-198000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-198000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m29.63447225s)

-- stdout --
	* [running-upgrade-198000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-198000" primary control-plane node in "running-upgrade-198000" cluster
	* Updating the running qemu2 "running-upgrade-198000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0927 10:36:26.167523    5001 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:36:26.167666    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:36:26.167670    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:36:26.167673    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:36:26.167791    5001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:36:26.168755    5001 out.go:352] Setting JSON to false
	I0927 10:36:26.185129    5001 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3950,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:36:26.185206    5001 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:36:26.188180    5001 out.go:177] * [running-upgrade-198000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:36:26.194828    5001 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:36:26.194885    5001 notify.go:220] Checking for updates...
	I0927 10:36:26.201796    5001 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:36:26.205706    5001 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:36:26.208686    5001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:36:26.211804    5001 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:36:26.214836    5001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:36:26.218107    5001 config.go:182] Loaded profile config "running-upgrade-198000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:36:26.221721    5001 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0927 10:36:26.224836    5001 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:36:26.228691    5001 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:36:26.235780    5001 start.go:297] selected driver: qemu2
	I0927 10:36:26.235786    5001 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-198000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50287 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0927 10:36:26.235831    5001 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:36:26.238238    5001 cni.go:84] Creating CNI manager for ""
	I0927 10:36:26.238272    5001 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:36:26.238296    5001 start.go:340] cluster config:
	{Name:running-upgrade-198000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50287 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0927 10:36:26.238358    5001 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:36:26.245762    5001 out.go:177] * Starting "running-upgrade-198000" primary control-plane node in "running-upgrade-198000" cluster
	I0927 10:36:26.249813    5001 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0927 10:36:26.249828    5001 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0927 10:36:26.249835    5001 cache.go:56] Caching tarball of preloaded images
	I0927 10:36:26.249892    5001 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:36:26.249897    5001 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0927 10:36:26.249953    5001 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/config.json ...
	I0927 10:36:26.250414    5001 start.go:360] acquireMachinesLock for running-upgrade-198000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:36:26.250443    5001 start.go:364] duration metric: took 23.792µs to acquireMachinesLock for "running-upgrade-198000"
	I0927 10:36:26.250451    5001 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:36:26.250456    5001 fix.go:54] fixHost starting: 
	I0927 10:36:26.251063    5001 fix.go:112] recreateIfNeeded on running-upgrade-198000: state=Running err=<nil>
	W0927 10:36:26.251071    5001 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:36:26.255779    5001 out.go:177] * Updating the running qemu2 "running-upgrade-198000" VM ...
	I0927 10:36:26.263763    5001 machine.go:93] provisionDockerMachine start ...
	I0927 10:36:26.263804    5001 main.go:141] libmachine: Using SSH client type: native
	I0927 10:36:26.263908    5001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10131dc00] 0x101320440 <nil>  [] 0s} localhost 50255 <nil> <nil>}
	I0927 10:36:26.263913    5001 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 10:36:26.324993    5001 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-198000
	
	I0927 10:36:26.325011    5001 buildroot.go:166] provisioning hostname "running-upgrade-198000"
	I0927 10:36:26.325071    5001 main.go:141] libmachine: Using SSH client type: native
	I0927 10:36:26.325189    5001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10131dc00] 0x101320440 <nil>  [] 0s} localhost 50255 <nil> <nil>}
	I0927 10:36:26.325195    5001 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-198000 && echo "running-upgrade-198000" | sudo tee /etc/hostname
	I0927 10:36:26.381380    5001 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-198000
	
	I0927 10:36:26.381442    5001 main.go:141] libmachine: Using SSH client type: native
	I0927 10:36:26.381551    5001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10131dc00] 0x101320440 <nil>  [] 0s} localhost 50255 <nil> <nil>}
	I0927 10:36:26.381560    5001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-198000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-198000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-198000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 10:36:26.435517    5001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 10:36:26.435527    5001 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19712-1508/.minikube CaCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19712-1508/.minikube}
	I0927 10:36:26.435535    5001 buildroot.go:174] setting up certificates
	I0927 10:36:26.435545    5001 provision.go:84] configureAuth start
	I0927 10:36:26.435551    5001 provision.go:143] copyHostCerts
	I0927 10:36:26.435601    5001 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem, removing ...
	I0927 10:36:26.435609    5001 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem
	I0927 10:36:26.435725    5001 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem (1078 bytes)
	I0927 10:36:26.435899    5001 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem, removing ...
	I0927 10:36:26.435908    5001 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem
	I0927 10:36:26.435955    5001 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem (1123 bytes)
	I0927 10:36:26.436061    5001 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem, removing ...
	I0927 10:36:26.436065    5001 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem
	I0927 10:36:26.436107    5001 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem (1679 bytes)
	I0927 10:36:26.436184    5001 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-198000 san=[127.0.0.1 localhost minikube running-upgrade-198000]
	I0927 10:36:26.491161    5001 provision.go:177] copyRemoteCerts
	I0927 10:36:26.491201    5001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 10:36:26.491207    5001 sshutil.go:53] new ssh client: &{IP:localhost Port:50255 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/running-upgrade-198000/id_rsa Username:docker}
	I0927 10:36:26.521505    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 10:36:26.527945    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0927 10:36:26.535515    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 10:36:26.542427    5001 provision.go:87] duration metric: took 106.874625ms to configureAuth
	I0927 10:36:26.542435    5001 buildroot.go:189] setting minikube options for container-runtime
	I0927 10:36:26.542545    5001 config.go:182] Loaded profile config "running-upgrade-198000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:36:26.542584    5001 main.go:141] libmachine: Using SSH client type: native
	I0927 10:36:26.542676    5001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10131dc00] 0x101320440 <nil>  [] 0s} localhost 50255 <nil> <nil>}
	I0927 10:36:26.542681    5001 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0927 10:36:26.596022    5001 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0927 10:36:26.596031    5001 buildroot.go:70] root file system type: tmpfs
	I0927 10:36:26.596085    5001 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0927 10:36:26.596146    5001 main.go:141] libmachine: Using SSH client type: native
	I0927 10:36:26.596266    5001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10131dc00] 0x101320440 <nil>  [] 0s} localhost 50255 <nil> <nil>}
	I0927 10:36:26.596299    5001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0927 10:36:26.654926    5001 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0927 10:36:26.654973    5001 main.go:141] libmachine: Using SSH client type: native
	I0927 10:36:26.655074    5001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10131dc00] 0x101320440 <nil>  [] 0s} localhost 50255 <nil> <nil>}
	I0927 10:36:26.655086    5001 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0927 10:36:26.711884    5001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 10:36:26.711893    5001 machine.go:96] duration metric: took 448.135584ms to provisionDockerMachine
	I0927 10:36:26.711901    5001 start.go:293] postStartSetup for "running-upgrade-198000" (driver="qemu2")
	I0927 10:36:26.711907    5001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 10:36:26.711950    5001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 10:36:26.711961    5001 sshutil.go:53] new ssh client: &{IP:localhost Port:50255 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/running-upgrade-198000/id_rsa Username:docker}
	I0927 10:36:26.744272    5001 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 10:36:26.745587    5001 info.go:137] Remote host: Buildroot 2021.02.12
	I0927 10:36:26.745596    5001 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19712-1508/.minikube/addons for local assets ...
	I0927 10:36:26.745662    5001 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19712-1508/.minikube/files for local assets ...
	I0927 10:36:26.745766    5001 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem -> 20392.pem in /etc/ssl/certs
	I0927 10:36:26.745860    5001 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 10:36:26.748347    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem --> /etc/ssl/certs/20392.pem (1708 bytes)
	I0927 10:36:26.756856    5001 start.go:296] duration metric: took 44.949792ms for postStartSetup
	I0927 10:36:26.756869    5001 fix.go:56] duration metric: took 506.428209ms for fixHost
	I0927 10:36:26.756913    5001 main.go:141] libmachine: Using SSH client type: native
	I0927 10:36:26.757027    5001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10131dc00] 0x101320440 <nil>  [] 0s} localhost 50255 <nil> <nil>}
	I0927 10:36:26.757035    5001 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 10:36:26.809833    5001 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727458586.682299763
	
	I0927 10:36:26.809842    5001 fix.go:216] guest clock: 1727458586.682299763
	I0927 10:36:26.809846    5001 fix.go:229] Guest: 2024-09-27 10:36:26.682299763 -0700 PDT Remote: 2024-09-27 10:36:26.756871 -0700 PDT m=+0.609460043 (delta=-74.571237ms)
	I0927 10:36:26.809857    5001 fix.go:200] guest clock delta is within tolerance: -74.571237ms
	I0927 10:36:26.809860    5001 start.go:83] releasing machines lock for "running-upgrade-198000", held for 559.426792ms
	I0927 10:36:26.809927    5001 ssh_runner.go:195] Run: cat /version.json
	I0927 10:36:26.809937    5001 sshutil.go:53] new ssh client: &{IP:localhost Port:50255 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/running-upgrade-198000/id_rsa Username:docker}
	I0927 10:36:26.809927    5001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 10:36:26.809973    5001 sshutil.go:53] new ssh client: &{IP:localhost Port:50255 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/running-upgrade-198000/id_rsa Username:docker}
	W0927 10:36:26.810557    5001 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50255: connect: connection refused
	I0927 10:36:26.810578    5001 retry.go:31] will retry after 194.105086ms: dial tcp [::1]:50255: connect: connection refused
	W0927 10:36:27.043671    5001 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0927 10:36:27.043756    5001 ssh_runner.go:195] Run: systemctl --version
	I0927 10:36:27.046589    5001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 10:36:27.049373    5001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 10:36:27.049425    5001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0927 10:36:27.053388    5001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0927 10:36:27.059457    5001 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 10:36:27.059466    5001 start.go:495] detecting cgroup driver to use...
	I0927 10:36:27.059537    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 10:36:27.065924    5001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0927 10:36:27.069194    5001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 10:36:27.072368    5001 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 10:36:27.072398    5001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 10:36:27.075608    5001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 10:36:27.078826    5001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 10:36:27.081874    5001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 10:36:27.084684    5001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 10:36:27.087928    5001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 10:36:27.091335    5001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 10:36:27.094772    5001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0927 10:36:27.097842    5001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 10:36:27.100432    5001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 10:36:27.103561    5001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:36:27.190874    5001 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0927 10:36:27.201930    5001 start.go:495] detecting cgroup driver to use...
	I0927 10:36:27.202019    5001 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0927 10:36:27.207109    5001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 10:36:27.211379    5001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 10:36:27.218864    5001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 10:36:27.223600    5001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 10:36:27.228736    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 10:36:27.233974    5001 ssh_runner.go:195] Run: which cri-dockerd
	I0927 10:36:27.235292    5001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0927 10:36:27.238558    5001 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0927 10:36:27.243448    5001 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0927 10:36:27.338892    5001 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0927 10:36:27.410232    5001 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0927 10:36:27.410289    5001 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0927 10:36:27.415615    5001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:36:27.514182    5001 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 10:36:30.211096    5001 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.696968083s)
	I0927 10:36:30.211156    5001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0927 10:36:30.215715    5001 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0927 10:36:30.222169    5001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 10:36:30.228001    5001 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0927 10:36:30.300596    5001 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0927 10:36:30.356711    5001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:36:30.442300    5001 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0927 10:36:30.448206    5001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 10:36:30.452873    5001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:36:30.510430    5001 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0927 10:36:30.551800    5001 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0927 10:36:30.551898    5001 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0927 10:36:30.554052    5001 start.go:563] Will wait 60s for crictl version
	I0927 10:36:30.554113    5001 ssh_runner.go:195] Run: which crictl
	I0927 10:36:30.556274    5001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 10:36:30.568082    5001 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0927 10:36:30.568169    5001 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 10:36:30.585703    5001 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 10:36:30.606744    5001 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0927 10:36:30.606824    5001 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0927 10:36:30.608185    5001 kubeadm.go:883] updating cluster {Name:running-upgrade-198000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50287 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0927 10:36:30.608228    5001 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0927 10:36:30.608278    5001 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 10:36:30.618857    5001 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 10:36:30.618866    5001 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0927 10:36:30.618924    5001 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0927 10:36:30.622131    5001 ssh_runner.go:195] Run: which lz4
	I0927 10:36:30.623377    5001 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 10:36:30.624710    5001 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 10:36:30.624723    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0927 10:36:31.519791    5001 docker.go:649] duration metric: took 896.480625ms to copy over tarball
	I0927 10:36:31.519848    5001 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 10:36:32.634154    5001 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.114322292s)
	I0927 10:36:32.634168    5001 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 10:36:32.649648    5001 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0927 10:36:32.652600    5001 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0927 10:36:32.657572    5001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:36:32.738327    5001 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 10:36:33.922082    5001 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.183769459s)
	I0927 10:36:33.922202    5001 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 10:36:33.941195    5001 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 10:36:33.941207    5001 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0927 10:36:33.941214    5001 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 10:36:33.945404    5001 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:36:33.947314    5001 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:36:33.949937    5001 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:36:33.949939    5001 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:36:33.950896    5001 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:36:33.951183    5001 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:36:33.952775    5001 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:36:33.952882    5001 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:36:33.954021    5001 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:36:33.954032    5001 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:36:33.955323    5001 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:36:33.955378    5001 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:36:33.956223    5001 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0927 10:36:33.956276    5001 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:36:33.957073    5001 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:36:33.958193    5001 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0927 10:36:34.337774    5001 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:36:34.351497    5001 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0927 10:36:34.351533    5001 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:36:34.351605    5001 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:36:34.362238    5001 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0927 10:36:34.364482    5001 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0927 10:36:34.365653    5001 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:36:34.375574    5001 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0927 10:36:34.375602    5001 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:36:34.375680    5001 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0927 10:36:34.375834    5001 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0927 10:36:34.375844    5001 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:36:34.375885    5001 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:36:34.385911    5001 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0927 10:36:34.386056    5001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0927 10:36:34.387221    5001 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0927 10:36:34.387832    5001 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0927 10:36:34.387848    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0927 10:36:34.414664    5001 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0927 10:36:34.420044    5001 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0927 10:36:34.420179    5001 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:36:34.439427    5001 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0927 10:36:34.439460    5001 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:36:34.439540    5001 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:36:34.444728    5001 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:36:34.456811    5001 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0927 10:36:34.461543    5001 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0927 10:36:34.461564    5001 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:36:34.461625    5001 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:36:34.488730    5001 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0927 10:36:34.488791    5001 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0927 10:36:34.488808    5001 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:36:34.488870    5001 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:36:34.513274    5001 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0927 10:36:34.513335    5001 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0927 10:36:34.513359    5001 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0927 10:36:34.513410    5001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0927 10:36:34.513411    5001 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0927 10:36:34.555714    5001 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0927 10:36:34.555734    5001 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0927 10:36:34.555754    5001 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0927 10:36:34.555766    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0927 10:36:34.555850    5001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0927 10:36:34.571146    5001 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0927 10:36:34.571170    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0927 10:36:34.607544    5001 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0927 10:36:34.607564    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0927 10:36:34.711254    5001 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0927 10:36:34.711275    5001 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0927 10:36:34.711282    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0927 10:36:34.766907    5001 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0927 10:36:34.766936    5001 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0927 10:36:34.766943    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0927 10:36:34.897746    5001 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0927 10:36:34.973112    5001 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0927 10:36:34.973250    5001 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:36:34.986093    5001 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0927 10:36:34.986120    5001 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:36:34.986189    5001 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:36:36.005793    5001 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.019591583s)
	I0927 10:36:36.005833    5001 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0927 10:36:36.006299    5001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0927 10:36:36.012321    5001 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0927 10:36:36.012366    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0927 10:36:36.066364    5001 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0927 10:36:36.066388    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0927 10:36:36.304585    5001 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0927 10:36:36.304630    5001 cache_images.go:92] duration metric: took 2.363470917s to LoadCachedImages
	W0927 10:36:36.304684    5001 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0927 10:36:36.304692    5001 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0927 10:36:36.304754    5001 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-198000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 10:36:36.304831    5001 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0927 10:36:36.318088    5001 cni.go:84] Creating CNI manager for ""
	I0927 10:36:36.318104    5001 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:36:36.318110    5001 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 10:36:36.318118    5001 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-198000 NodeName:running-upgrade-198000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 10:36:36.318186    5001 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-198000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
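
The four-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the options struct printed at kubeadm.go:181. A toy version of that render step with Go's text/template; the struct fields and template fragment below are illustrative only, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Opts carries the handful of values this sketch substitutes;
    // minikube's real options struct is far larger (see kubeadm.go:181).
    type Opts struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    	K8sVersion       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	// Values copied from the log: node 10.0.2.15, port 8443, v1.24.1.
    	_ = t.Execute(os.Stdout, Opts{"10.0.2.15", 8443, "running-upgrade-198000", "10.244.0.0/16", "v1.24.1"})
    }
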
	
	I0927 10:36:36.318244    5001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0927 10:36:36.321515    5001 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 10:36:36.321548    5001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 10:36:36.324149    5001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0927 10:36:36.329193    5001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 10:36:36.334100    5001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0927 10:36:36.339760    5001 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0927 10:36:36.341334    5001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:36:36.424738    5001 ssh_runner.go:195] Run: sudo systemctl start kubelet
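
The "scp memory -->" lines above stream the generated unit files directly onto the guest before the daemon-reload and kubelet start. Run locally, the equivalent is just writing the drop-in and restarting the service; a sketch under that assumption (requires root; the ExecStart line is abbreviated from the kubeadm.go:946 dump earlier in the log):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Drop-in matching the 10-kubeadm.conf written above, trimmed.
    	dropIn := `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=running-upgrade-198000 --node-ip=10.0.2.15
    `
    	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
    		panic(err)
    	}
    	// Same two systemctl steps the log runs over ssh.
    	_ = exec.Command("systemctl", "daemon-reload").Run()
    	_ = exec.Command("systemctl", "start", "kubelet").Run()
    }
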
	I0927 10:36:36.429717    5001 certs.go:68] Setting up /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000 for IP: 10.0.2.15
	I0927 10:36:36.429725    5001 certs.go:194] generating shared ca certs ...
	I0927 10:36:36.429733    5001 certs.go:226] acquiring lock for ca certs: {Name:mk0418f7d8f4c252d010b1c431fe702739668245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:36:36.429906    5001 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key
	I0927 10:36:36.429961    5001 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key
	I0927 10:36:36.429966    5001 certs.go:256] generating profile certs ...
	I0927 10:36:36.430042    5001 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/client.key
	I0927 10:36:36.430059    5001 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.key.56e2b459
	I0927 10:36:36.430072    5001 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.crt.56e2b459 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0927 10:36:36.495276    5001 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.crt.56e2b459 ...
	I0927 10:36:36.495280    5001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.crt.56e2b459: {Name:mka5ded446c100d0dc8ca875ba2e5f58543c0843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:36:36.495864    5001 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.key.56e2b459 ...
	I0927 10:36:36.495869    5001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.key.56e2b459: {Name:mkc8e73a58d3ce521df4d16df79733fa4914ce28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:36:36.496004    5001 certs.go:381] copying /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.crt.56e2b459 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.crt
	I0927 10:36:36.496193    5001 certs.go:385] copying /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.key.56e2b459 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.key
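
The apiserver serving cert generated above carries IP SANs for the service VIP (10.96.0.1, the first address of ServiceCIDR 10.96.0.0/12), loopback, the gateway, and the node address, so clients can reach the apiserver by any of them. A self-signed approximation with crypto/x509 for illustration; minikube signs with the minikubeCA key rather than self-signing:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // CertExpiration:26280h0m0s in the log
    		IPAddresses: []net.IP{ // the SAN list from crypto.go:68 above
    			net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    			net.ParseIP("10.0.2.15"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
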
	I0927 10:36:36.496372    5001 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/proxy-client.key
	I0927 10:36:36.496519    5001 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039.pem (1338 bytes)
	W0927 10:36:36.496550    5001 certs.go:480] ignoring /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039_empty.pem, impossibly tiny 0 bytes
	I0927 10:36:36.496557    5001 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 10:36:36.496587    5001 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem (1078 bytes)
	I0927 10:36:36.496612    5001 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem (1123 bytes)
	I0927 10:36:36.496637    5001 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem (1679 bytes)
	I0927 10:36:36.496703    5001 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem (1708 bytes)
	I0927 10:36:36.497040    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 10:36:36.504389    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 10:36:36.511790    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 10:36:36.519272    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 10:36:36.526591    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 10:36:36.533375    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 10:36:36.540350    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 10:36:36.547503    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 10:36:36.554907    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039.pem --> /usr/share/ca-certificates/2039.pem (1338 bytes)
	I0927 10:36:36.561938    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem --> /usr/share/ca-certificates/20392.pem (1708 bytes)
	I0927 10:36:36.568390    5001 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 10:36:36.575357    5001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 10:36:36.580484    5001 ssh_runner.go:195] Run: openssl version
	I0927 10:36:36.582398    5001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 10:36:36.585359    5001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:36:36.586860    5001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:36:36.586885    5001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:36:36.588726    5001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 10:36:36.591661    5001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2039.pem && ln -fs /usr/share/ca-certificates/2039.pem /etc/ssl/certs/2039.pem"
	I0927 10:36:36.595049    5001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2039.pem
	I0927 10:36:36.596440    5001 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:11 /usr/share/ca-certificates/2039.pem
	I0927 10:36:36.596460    5001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2039.pem
	I0927 10:36:36.598239    5001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2039.pem /etc/ssl/certs/51391683.0"
	I0927 10:36:36.600968    5001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20392.pem && ln -fs /usr/share/ca-certificates/20392.pem /etc/ssl/certs/20392.pem"
	I0927 10:36:36.604018    5001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20392.pem
	I0927 10:36:36.605514    5001 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:11 /usr/share/ca-certificates/20392.pem
	I0927 10:36:36.605537    5001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20392.pem
	I0927 10:36:36.607156    5001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20392.pem /etc/ssl/certs/3ec20f2e.0"
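
The test/ln pairs above install each CA under /etc/ssl/certs by OpenSSL subject hash (b5213941.0 for minikubeCA.pem, and so on), which is how OpenSSL locates trust anchors at handshake time. The same dance in Go, shelling out to openssl exactly as the log does; paths are taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // installCA computes the subject hash with `openssl x509 -hash -noout`
    // and symlinks the PEM under /etc/ssl/certs/<hash>.0. Error handling
    // is trimmed; needs root for the symlink.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	return exec.Command("ln", "-fs", pemPath, link).Run()
    }

    func main() {
    	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }
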
	I0927 10:36:36.610143    5001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 10:36:36.611555    5001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 10:36:36.613434    5001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 10:36:36.615159    5001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 10:36:36.617019    5001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 10:36:36.618888    5001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 10:36:36.620790    5001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
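
Each "openssl x509 -checkend 86400" run above asks one question: does the certificate expire within the next 24 hours (exit non-zero if so)? A pure-Go equivalent for comparison; this is not what minikube runs (it shells out to openssl, as logged):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend reports whether the cert at path expires within the window,
    // mirroring `openssl x509 -noout -checkend <seconds>`.
    func checkend(path string, within time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(within).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    	fmt.Println(soon, err)
    }
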
	I0927 10:36:36.622430    5001 kubeadm.go:392] StartCluster: {Name:running-upgrade-198000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50287 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0927 10:36:36.622505    5001 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 10:36:36.633709    5001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 10:36:36.637015    5001 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 10:36:36.637026    5001 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 10:36:36.637061    5001 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 10:36:36.639997    5001 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 10:36:36.640224    5001 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-198000" does not appear in /Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:36:36.640273    5001 kubeconfig.go:62] /Users/jenkins/minikube-integration/19712-1508/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-198000" cluster setting kubeconfig missing "running-upgrade-198000" context setting]
	I0927 10:36:36.640421    5001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/kubeconfig: {Name:mk8300c379932403020f33b54e2599e68fb2c757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:36:36.640858    5001 kapi.go:59] client config for running-upgrade-198000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/client.key", CAFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1028f65d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 10:36:36.641213    5001 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 10:36:36.644135    5001 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-198000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0927 10:36:36.644141    5001 kubeadm.go:1160] stopping kube-system containers ...
	I0927 10:36:36.644195    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 10:36:36.655530    5001 docker.go:483] Stopping containers: [a48f2917378a 848bcf54791b fd9fd01cd9fe 02fb480ec8ef 22672d08dd78 a4bbda14b841 26acebb976b8 0df791913d28 8037932109d8 58fc035fe7ca 77946e77d61b b5b08b7c4070 a0614ea353dd fd21cbcc080e]
	I0927 10:36:36.655597    5001 ssh_runner.go:195] Run: docker stop a48f2917378a 848bcf54791b fd9fd01cd9fe 02fb480ec8ef 22672d08dd78 a4bbda14b841 26acebb976b8 0df791913d28 8037932109d8 58fc035fe7ca 77946e77d61b b5b08b7c4070 a0614ea353dd fd21cbcc080e
	I0927 10:36:36.666411    5001 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 10:36:36.769058    5001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 10:36:36.773502    5001 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Sep 27 17:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep 27 17:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 27 17:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 27 17:36 /etc/kubernetes/scheduler.conf
	
	I0927 10:36:36.773536    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/admin.conf
	I0927 10:36:36.777055    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0927 10:36:36.777092    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 10:36:36.780493    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/kubelet.conf
	I0927 10:36:36.783889    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0927 10:36:36.783917    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 10:36:36.787424    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/controller-manager.conf
	I0927 10:36:36.790858    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0927 10:36:36.790883    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 10:36:36.793982    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/scheduler.conf
	I0927 10:36:36.796622    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0927 10:36:36.796646    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 10:36:36.799578    5001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 10:36:36.802692    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:36:36.824083    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:36:37.085846    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:36:37.276409    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:36:37.297084    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
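
Because existing configuration files were found (kubeadm.go:408 above), the restart path re-runs individual "kubeadm init" phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init, so existing cluster state is preserved. Sketched as a loop; the binary and config paths are taken from the log, and the `sudo env PATH=...` wrapper the log uses is simplified to an absolute path here:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	// The five phases run above, in order.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{kubeadm, "init", "phase"}, p...)
    		args = append(args, "--config", cfg)
    		if err := exec.Command("sudo", args...).Run(); err != nil {
    			fmt.Println("phase failed:", p, err)
    			return
    		}
    	}
    }
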
	I0927 10:36:37.341331    5001 api_server.go:52] waiting for apiserver process to appear ...
	I0927 10:36:37.341412    5001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:36:37.843470    5001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:36:38.343464    5001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:36:38.347751    5001 api_server.go:72] duration metric: took 1.0064475s to wait for apiserver process to appear ...
	I0927 10:36:38.347762    5001 api_server.go:88] waiting for apiserver healthz status ...
	I0927 10:36:38.347780    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:36:43.349709    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:36:43.349734    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:36:48.349968    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:36:48.350052    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:36:53.350919    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:36:53.350999    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:36:58.351973    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:36:58.352080    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:37:03.353632    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:37:03.353739    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:37:08.355726    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:37:08.355814    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:37:13.358277    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:37:13.358376    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:37:18.361098    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:37:18.361183    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:37:23.363633    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:37:23.363722    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:37:28.365088    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:37:28.365201    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:37:33.367821    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:37:33.367926    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:37:38.370574    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
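
Each healthz probe above times out after roughly five seconds and the waiter simply retries; only after repeated failures does it fall back to gathering component logs below. A minimal version of such a poll loop; the 5s client timeout matches the gaps between probes, and TLS verification is skipped only because this sketch does not pin the cluster CA the way the client config at kapi.go:59 above does:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz probes /healthz with a short per-request timeout,
    // retrying until the overall deadline passes.
    func waitHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }
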
	I0927 10:37:38.371158    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:37:38.412091    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:37:38.412254    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:37:38.434442    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:37:38.434579    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:37:38.449571    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:37:38.449650    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:37:38.465609    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:37:38.465691    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:37:38.476571    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:37:38.476640    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:37:38.494555    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:37:38.494621    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:37:38.504857    5001 logs.go:276] 0 containers: []
	W0927 10:37:38.504870    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:37:38.504941    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:37:38.515124    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:37:38.515154    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:37:38.515159    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:37:38.519924    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:37:38.519933    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:37:38.533374    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:37:38.533389    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:37:38.544322    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:37:38.544337    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:37:38.556250    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:37:38.556261    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:37:38.567562    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:37:38.567579    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:37:38.578990    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:37:38.578999    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:37:38.615766    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:37:38.615868    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:37:38.616885    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:37:38.616889    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:37:38.641742    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:37:38.641756    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:37:38.667759    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:37:38.667769    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:37:38.680154    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:37:38.680165    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:37:38.695956    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:37:38.695967    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:37:38.707575    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:37:38.707584    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:37:38.730906    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:37:38.730916    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:37:38.758276    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:37:38.758291    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:37:38.826648    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:37:38.826662    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:37:38.841052    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:37:38.841068    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:37:38.841096    5001 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0927 10:37:38.841100    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	  Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:37:38.841103    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	  Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:37:38.841107    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:37:38.841110    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
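
The gathering pass that follows each failed wait is itself mechanical: list containers whose docker name matches k8s_<component>, then pull the last 400 lines from each, as in the logs.go:276/123 lines above. A compact sketch of that loop (output formatting is simplified relative to minikube's):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gather lists container IDs for one component and dumps their logs.
    func gather(component string) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return
    	}
    	for _, id := range strings.Fields(string(out)) {
    		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("==> %s [%s]\n%s", component, id, logs)
    	}
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		gather(c)
    	}
    }
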
	I0927 10:37:48.844888    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:37:53.847695    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:37:53.848288    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:37:53.888681    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:37:53.888858    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:37:53.911070    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:37:53.911205    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:37:53.926405    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:37:53.926501    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:37:53.938720    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:37:53.938804    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:37:53.949205    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:37:53.949294    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:37:53.959927    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:37:53.959998    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:37:53.970159    5001 logs.go:276] 0 containers: []
	W0927 10:37:53.970170    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:37:53.970246    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:37:53.980528    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:37:53.980545    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:37:53.980551    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:37:54.009536    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:37:54.009546    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:37:54.023789    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:37:54.023808    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:37:54.042452    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:37:54.042462    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:37:54.061276    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:37:54.061287    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:37:54.074390    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:37:54.074403    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:37:54.078988    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:37:54.078994    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:37:54.096366    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:37:54.096378    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:37:54.107861    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:37:54.107875    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:37:54.119402    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:37:54.119414    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:37:54.158836    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:37:54.158933    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:37:54.159950    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:37:54.159957    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:37:54.178791    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:37:54.178803    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:37:54.192650    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:37:54.192661    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:37:54.205554    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:37:54.205563    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:37:54.242434    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:37:54.242445    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:37:54.260222    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:37:54.260232    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:37:54.271808    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:37:54.271819    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:37:54.271846    5001 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0927 10:37:54.271851    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	  Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:37:54.271855    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	  Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:37:54.271859    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:37:54.271862    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:38:04.275912    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:38:09.278580    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:38:09.279049    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:38:09.308848    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:38:09.309007    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:38:09.328468    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:38:09.328584    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:38:09.344319    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:38:09.344411    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:38:09.355523    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:38:09.355610    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:38:09.366350    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:38:09.366438    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:38:09.377176    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:38:09.377255    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:38:09.387345    5001 logs.go:276] 0 containers: []
	W0927 10:38:09.387356    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:38:09.387419    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:38:09.397845    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:38:09.397862    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:38:09.397867    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:38:09.411710    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:38:09.411719    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:38:09.422961    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:38:09.422974    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:38:09.448104    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:38:09.448114    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:38:09.463619    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:38:09.463632    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:38:09.477935    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:38:09.477945    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:38:09.517657    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:38:09.517757    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:38:09.518744    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:38:09.518748    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:38:09.523502    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:38:09.523510    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:38:09.558498    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:38:09.558507    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:38:09.576017    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:38:09.576031    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:38:09.595066    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:38:09.595076    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:38:09.606254    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:38:09.606266    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:38:09.620223    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:38:09.620232    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:38:09.636983    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:38:09.636993    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:38:09.651668    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:38:09.651678    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:38:09.662739    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:38:09.662752    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:38:09.673913    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:38:09.673925    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:38:09.673950    5001 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0927 10:38:09.673955    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	  Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:38:09.673958    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:38:09.673961    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:38:09.673964    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
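[Editor's note] The kubelet problem repeated above is an authorization failure from the Kubernetes node authorizer: a kubelet may only read a ConfigMap once a pod referencing it is bound to that node, so the apiserver answers "no relationship found between node ... and this object" until that link exists, which can happen transiently while a cluster is upgraded in place. As a hedged illustration (not part of the test run), the same denial can be reproduced with kubectl's impersonation flags; the node name is taken from the log:

    # Hypothetical check, assuming a working kubeconfig for this cluster:
    # ask the apiserver whether the node identity may list configmaps.
    kubectl auth can-i list configmaps -n kube-system \
      --as=system:node:running-upgrade-198000 --as-group=system:nodes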
	I0927 10:38:19.678045    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:38:24.680755    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
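[Editor's note] The check/stopped pairs that repeat from here on are minikube's apiserver health probe: each pass hits /healthz, gives up after about five seconds (the "Client.Timeout exceeded" text is Go's net/http client timeout), re-collects component logs, and retries roughly every ten seconds. A minimal shell equivalent, assuming curl is available; the endpoint is taken from the log and the 5s budget is inferred from the 10:38:19 to 10:38:24 gap, not from minikube's source:

    # Hypothetical stand-in for the probe loop seen in this log.
    APISERVER="https://10.0.2.15:8443"
    until curl -sk --max-time 5 "$APISERVER/healthz" | grep -qx 'ok'; do
      echo "apiserver not healthy yet; retrying in 10s" >&2
      sleep 10
    done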
	I0927 10:38:24.681185    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:38:24.713238    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:38:24.713388    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:38:24.735211    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:38:24.735321    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:38:24.749248    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:38:24.749327    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:38:24.762816    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:38:24.762920    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:38:24.774470    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:38:24.774546    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:38:24.785496    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:38:24.785580    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:38:24.795649    5001 logs.go:276] 0 containers: []
	W0927 10:38:24.795660    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:38:24.795726    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:38:24.806144    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
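[Editor's note] Each collection pass begins, as above, by enumerating control-plane containers one component at a time: kubelet-launched containers are named k8s_<container>_<pod>_..., so a name filter per component yields the IDs whose logs are then tailed. A condensed sketch of the same idea, assuming direct shell access to the node's Docker daemon (the loop and variable names are illustrative, not minikube's code):

    # Hypothetical one-shot version of the per-component enumeration above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
        docker logs --tail 400 "$id"   # same tail depth the gatherer uses
      done
    done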
	I0927 10:38:24.806164    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:38:24.806169    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:38:24.817558    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:38:24.817568    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:38:24.843349    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:38:24.843356    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:38:24.881234    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:38:24.881331    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:38:24.882315    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:38:24.882319    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:38:24.905015    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:38:24.905024    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:38:24.916028    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:38:24.916039    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:38:24.930977    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:38:24.930990    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
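[Editor's note] The container-status command above relies on a small shell fallback: the backtick substitution expands to crictl's full path when it is installed, and to the bare word crictl otherwise, so the command fails cleanly and the `|| sudo docker ps -a` branch takes over on Docker-only nodes. Unrolled for readability (a hypothetical rewrite; unlike the one-liner, it does not fall back when an installed crictl exits non-zero):

    # Hypothetical unrolled equivalent of:
    #   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
      sudo "$(command -v crictl)" ps -a
    else
      sudo docker ps -a
    fi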
	I0927 10:38:24.942726    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:38:24.942738    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:38:24.961901    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:38:24.961912    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:38:24.979239    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:38:24.979251    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:38:24.991379    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:38:24.991390    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:38:25.011032    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:38:25.011040    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:38:25.015560    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:38:25.015567    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:38:25.051536    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:38:25.051550    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:38:25.062923    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:38:25.062937    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:38:25.076183    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:38:25.076197    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:38:25.097179    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:38:25.097189    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:38:25.097213    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:38:25.097220    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:38:25.097223    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:38:25.097226    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:38:25.097229    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:38:35.101235    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:38:40.103798    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:38:40.104308    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:38:40.143585    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:38:40.143743    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:38:40.168297    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:38:40.168429    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:38:40.185262    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:38:40.185354    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:38:40.196611    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:38:40.196699    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:38:40.215514    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:38:40.215599    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:38:40.228364    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:38:40.228439    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:38:40.238309    5001 logs.go:276] 0 containers: []
	W0927 10:38:40.238326    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:38:40.238385    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:38:40.248808    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:38:40.248826    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:38:40.248832    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:38:40.285909    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:38:40.285922    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:38:40.300600    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:38:40.300612    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:38:40.323750    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:38:40.323759    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:38:40.338675    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:38:40.338686    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:38:40.357019    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:38:40.357027    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:38:40.374857    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:38:40.374866    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:38:40.400076    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:38:40.400086    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:38:40.412320    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:38:40.412331    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:38:40.424852    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:38:40.424860    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:38:40.438412    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:38:40.438421    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:38:40.475080    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:38:40.475178    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:38:40.476167    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:38:40.476172    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:38:40.480474    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:38:40.480481    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:38:40.497591    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:38:40.497601    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:38:40.511082    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:38:40.511094    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:38:40.522351    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:38:40.522360    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:38:40.534056    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:38:40.534066    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:38:40.534094    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:38:40.534099    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:38:40.534102    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:38:40.534105    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:38:40.534108    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:38:50.537040    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:38:55.539507    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:38:55.539720    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:38:55.560827    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:38:55.560917    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:38:55.571710    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:38:55.571800    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:38:55.582600    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:38:55.582685    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:38:55.594337    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:38:55.594418    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:38:55.605133    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:38:55.605212    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:38:55.619101    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:38:55.619175    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:38:55.629365    5001 logs.go:276] 0 containers: []
	W0927 10:38:55.629377    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:38:55.629438    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:38:55.640050    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:38:55.640068    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:38:55.640073    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:38:55.652798    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:38:55.652809    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:38:55.664555    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:38:55.664566    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:38:55.690475    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:38:55.690483    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:38:55.702825    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:38:55.702839    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:38:55.738751    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:38:55.738766    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:38:55.752949    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:38:55.752959    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:38:55.769615    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:38:55.769633    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:38:55.782130    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:38:55.782143    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:38:55.800329    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:38:55.800339    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:38:55.819088    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:38:55.819103    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:38:55.830570    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:38:55.830580    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:38:55.868670    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:38:55.868768    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:38:55.869785    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:38:55.869789    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:38:55.884175    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:38:55.884188    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:38:55.904202    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:38:55.904214    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:38:55.908645    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:38:55.908653    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:38:55.920712    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:38:55.920723    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:38:55.920750    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:38:55.920755    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:38:55.920758    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:38:55.920761    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:38:55.920772    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:39:05.924696    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:39:10.927282    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:39:10.927425    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:39:10.943014    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:39:10.943104    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:39:10.955792    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:39:10.955882    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:39:10.970159    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:39:10.970284    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:39:10.982603    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:39:10.982696    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:39:10.994934    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:39:10.995025    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:39:11.006984    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:39:11.007083    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:39:11.019300    5001 logs.go:276] 0 containers: []
	W0927 10:39:11.019313    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:39:11.019393    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:39:11.031581    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:39:11.031599    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:39:11.031604    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:39:11.045147    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:39:11.045158    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:39:11.072687    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:39:11.072753    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:39:11.088700    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:39:11.088719    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:39:11.110522    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:39:11.110545    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:39:11.126717    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:39:11.126732    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:39:11.168476    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:39:11.168490    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:39:11.187007    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:39:11.187019    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:39:11.207647    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:39:11.207661    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:39:11.226513    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:39:11.226527    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:39:11.240221    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:39:11.240233    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:39:11.282728    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:11.282832    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:11.283885    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:39:11.283894    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:39:11.290900    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:39:11.290914    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:39:11.305430    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:39:11.305443    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:39:11.325150    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:39:11.325163    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:39:11.338642    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:39:11.338655    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:39:11.352364    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:11.352377    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:39:11.352402    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:39:11.352407    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:11.352411    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:11.352419    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:11.352423    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:39:21.354002    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:39:26.356288    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:39:26.356482    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:39:26.374428    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:39:26.374542    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:39:26.387506    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:39:26.387588    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:39:26.399200    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:39:26.399284    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:39:26.410303    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:39:26.410384    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:39:26.420587    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:39:26.420669    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:39:26.431408    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:39:26.431498    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:39:26.445658    5001 logs.go:276] 0 containers: []
	W0927 10:39:26.445670    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:39:26.445744    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:39:26.456804    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:39:26.456823    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:39:26.456830    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:39:26.461032    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:39:26.461038    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:39:26.473072    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:39:26.473085    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:39:26.487580    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:39:26.487591    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:39:26.504749    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:39:26.504761    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:39:26.517698    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:39:26.517710    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:39:26.531176    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:39:26.531186    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:39:26.572528    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:26.572635    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:26.573688    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:39:26.573695    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:39:26.592475    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:39:26.592491    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:39:26.604837    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:39:26.604848    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:39:26.650322    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:39:26.650338    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:39:26.674350    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:39:26.674366    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:39:26.689304    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:39:26.689318    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:39:26.709326    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:39:26.709340    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:39:26.729284    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:39:26.729298    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:39:26.741350    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:39:26.741363    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:39:26.766893    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:26.766911    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:39:26.766946    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:39:26.766953    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:26.766957    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:26.766961    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:26.766973    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:39:36.770865    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:39:41.771015    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:39:41.771160    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:39:41.783603    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:39:41.783697    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:39:41.815133    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:39:41.815226    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:39:41.830660    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:39:41.830760    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:39:41.845327    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:39:41.845413    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:39:41.857753    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:39:41.857845    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:39:41.870023    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:39:41.870111    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:39:41.881822    5001 logs.go:276] 0 containers: []
	W0927 10:39:41.881837    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:39:41.881916    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:39:41.894131    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:39:41.894152    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:39:41.894158    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:39:41.907698    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:39:41.907709    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:39:41.925204    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:39:41.925225    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:39:41.939722    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:39:41.939737    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:39:41.944590    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:39:41.944603    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:39:41.961203    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:39:41.961217    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:39:41.974614    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:39:41.974626    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:39:41.999810    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:39:41.999826    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:39:42.015748    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:39:42.015763    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:39:42.035039    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:39:42.035058    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:39:42.056770    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:39:42.056784    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:39:42.072645    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:39:42.072659    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:39:42.111881    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:39:42.111893    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:39:42.129899    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:39:42.129911    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:39:42.142570    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:39:42.142584    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:39:42.155137    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:39:42.155148    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:39:42.193255    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:42.193364    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:42.194416    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:42.194421    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:39:42.194451    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:39:42.194456    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:42.194460    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:42.194463    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:42.194466    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:39:52.198279    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:39:57.198418    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:39:57.198527    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:39:57.210528    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:39:57.210618    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:39:57.221854    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:39:57.221933    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:39:57.234756    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:39:57.234840    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:39:57.252000    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:39:57.252082    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:39:57.265820    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:39:57.265900    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:39:57.279315    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:39:57.279394    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:39:57.293090    5001 logs.go:276] 0 containers: []
	W0927 10:39:57.293102    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:39:57.293177    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:39:57.305524    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:39:57.305544    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:39:57.305551    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:39:57.326647    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:39:57.326660    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:39:57.344241    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:39:57.344254    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:39:57.359238    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:39:57.359248    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:39:57.373297    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:39:57.373308    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:39:57.399035    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:39:57.399047    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:39:57.412565    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:39:57.412582    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:39:57.430123    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:39:57.430137    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:39:57.453496    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:39:57.453512    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:39:57.468795    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:39:57.468809    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:39:57.482743    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:39:57.482754    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:39:57.523938    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:57.524041    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:57.525098    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:39:57.525106    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:39:57.529943    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:39:57.529953    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:39:57.571554    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:39:57.571567    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:39:57.592126    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:39:57.592141    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:39:57.611364    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:39:57.611381    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:39:57.631618    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:57.631631    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:39:57.631661    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:39:57.631666    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:57.631670    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:57.631692    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:57.631697    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
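Each of the polling cycles above follows the same two-step pattern: list every container (running or exited) whose name matches the kubelet's k8s_<component> naming convention, then tail the last 400 lines of each match. A minimal bash sketch of that pattern, assuming docker CLI access inside the guest (the component list and tail length are taken from the log lines above; the loop itself is illustrative):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
        # List all containers, including exited ones, matching the kubelet's k8s_<component> prefix.
        for id in $(docker ps -a --filter=name=k8s_${name} --format='{{.ID}}'); do
            # Tail each match, mirroring the "docker logs --tail 400 <id>" runs above.
            docker logs --tail 400 "$id"
        done
    done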
	I0927 10:40:07.635560    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:12.637651    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
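The healthz probe that repeatedly reports "stopped" is a plain HTTPS GET against the apiserver with a 5-second client timeout, retried roughly every 10 seconds. An equivalent hand-run check, assuming shell access to the guest (-k here stands in for the CA verification the Go client performs against minikube's ca.crt):

    # Probe the apiserver health endpoint with the same 5s budget as api_server.go.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver not healthy yet"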
	I0927 10:40:12.637824    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:40:12.648456    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:40:12.648549    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:40:12.659207    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:40:12.659300    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:40:12.672672    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:40:12.672754    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:40:12.682761    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:40:12.682843    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:40:12.693473    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:40:12.693544    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:40:12.703972    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:40:12.704065    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:40:12.713973    5001 logs.go:276] 0 containers: []
	W0927 10:40:12.713983    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:40:12.714053    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:40:12.725251    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:40:12.725266    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:40:12.725271    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:40:12.737537    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:40:12.737546    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:40:12.777006    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:40:12.777017    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:40:12.791081    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:40:12.791092    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:40:12.803480    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:40:12.803490    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:40:12.818428    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:40:12.818439    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:40:12.830469    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:40:12.830479    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:40:12.855434    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:40:12.855445    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:40:12.893819    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:40:12.893917    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:40:12.894970    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:40:12.894979    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:40:12.909391    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:40:12.909406    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:40:12.928309    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:40:12.928322    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:40:12.945979    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:40:12.945994    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:40:12.957389    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:40:12.957398    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:40:12.975584    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:40:12.975597    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:40:12.980140    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:40:12.980146    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:40:12.991309    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:40:12.991320    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:40:13.003039    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:40:13.003051    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:40:13.003078    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:40:13.003083    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:40:13.003088    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:40:13.003091    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:40:13.003500    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:40:23.007415    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:28.008677    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:28.009025    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:40:28.035319    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:40:28.035468    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:40:28.054090    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:40:28.054191    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:40:28.067991    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:40:28.068081    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:40:28.080951    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:40:28.081032    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:40:28.091728    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:40:28.091806    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:40:28.101964    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:40:28.102033    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:40:28.112203    5001 logs.go:276] 0 containers: []
	W0927 10:40:28.112214    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:40:28.112275    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:40:28.122599    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:40:28.122614    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:40:28.122620    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:40:28.159420    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:40:28.159437    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:40:28.178707    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:40:28.178717    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:40:28.191032    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:40:28.191043    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:40:28.203269    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:40:28.203280    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:40:28.240235    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:40:28.240339    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:40:28.241357    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:40:28.241363    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:40:28.245857    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:40:28.245868    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:40:28.263604    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:40:28.263615    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:40:28.279456    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:40:28.279464    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:40:28.290580    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:40:28.290590    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:40:28.302877    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:40:28.302892    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:40:28.318034    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:40:28.318050    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:40:28.341240    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:40:28.341248    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:40:28.354306    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:40:28.354317    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:40:28.369636    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:40:28.369651    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:40:28.387698    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:40:28.387709    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:40:28.398984    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:40:28.398997    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:40:28.399024    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:40:28.399028    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:40:28.399032    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:40:28.399036    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:40:28.399039    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:40:38.402972    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:43.403790    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:43.403858    5001 kubeadm.go:597] duration metric: took 4m6.773253959s to restartPrimaryControlPlane
	W0927 10:40:43.403924    5001 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 10:40:43.403953    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
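Having failed to restart the existing control plane within its budget (4m6.77s above), minikube falls back to wiping the cluster state and re-initializing. The reset it runs is shown verbatim in the Run line above; reproducing it by hand looks like this (paths come straight from that line, and --force skips the interactive confirmation):

    # Tear down the stale control plane non-interactively before kubeadm init.
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force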
	I0927 10:40:44.339694    5001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 10:40:44.344685    5001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 10:40:44.347354    5001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 10:40:44.350159    5001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 10:40:44.350166    5001 kubeadm.go:157] found existing configuration files:
	
	I0927 10:40:44.350197    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/admin.conf
	I0927 10:40:44.352758    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 10:40:44.352787    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 10:40:44.355219    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/kubelet.conf
	I0927 10:40:44.358279    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 10:40:44.358306    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 10:40:44.361424    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/controller-manager.conf
	I0927 10:40:44.363773    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 10:40:44.363799    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 10:40:44.366558    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/scheduler.conf
	I0927 10:40:44.369798    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 10:40:44.369828    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
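The four grep/rm pairs above are stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm init can regenerate it. Condensed into one loop (the endpoint and file names are exactly those in the log):

    endpoint="https://control-plane.minikube.internal:50287"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Keep a config only if it already targets the expected endpoint; otherwise delete it.
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done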
	I0927 10:40:44.372728    5001 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 10:40:44.390846    5001 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0927 10:40:44.390952    5001 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 10:40:44.438354    5001 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 10:40:44.438489    5001 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 10:40:44.438544    5001 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 10:40:44.492585    5001 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 10:40:44.496735    5001 out.go:235]   - Generating certificates and keys ...
	I0927 10:40:44.496770    5001 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 10:40:44.496810    5001 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 10:40:44.496856    5001 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 10:40:44.496900    5001 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 10:40:44.496936    5001 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 10:40:44.496967    5001 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 10:40:44.497003    5001 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 10:40:44.497041    5001 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 10:40:44.497079    5001 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 10:40:44.497117    5001 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 10:40:44.497136    5001 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 10:40:44.497161    5001 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 10:40:44.725094    5001 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 10:40:44.786901    5001 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 10:40:44.961939    5001 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 10:40:45.180112    5001 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 10:40:45.208490    5001 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 10:40:45.209714    5001 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 10:40:45.209746    5001 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 10:40:45.283048    5001 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 10:40:45.287163    5001 out.go:235]   - Booting up control plane ...
	I0927 10:40:45.287212    5001 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 10:40:45.287255    5001 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 10:40:45.287317    5001 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 10:40:45.287406    5001 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 10:40:45.287511    5001 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 10:40:49.788138    5001 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504298 seconds
	I0927 10:40:49.788246    5001 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 10:40:49.793947    5001 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 10:40:50.320643    5001 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 10:40:50.320989    5001 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-198000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 10:40:50.828655    5001 kubeadm.go:310] [bootstrap-token] Using token: jrf2xd.lubh65ru8b16tcp9
	I0927 10:40:50.835106    5001 out.go:235]   - Configuring RBAC rules ...
	I0927 10:40:50.835211    5001 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 10:40:50.835289    5001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 10:40:50.838485    5001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 10:40:50.839814    5001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 10:40:50.841044    5001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 10:40:50.842611    5001 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 10:40:50.847438    5001 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 10:40:51.029881    5001 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 10:40:51.233472    5001 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 10:40:51.234007    5001 kubeadm.go:310] 
	I0927 10:40:51.234047    5001 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 10:40:51.234051    5001 kubeadm.go:310] 
	I0927 10:40:51.234094    5001 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 10:40:51.234099    5001 kubeadm.go:310] 
	I0927 10:40:51.234113    5001 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 10:40:51.234156    5001 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 10:40:51.234185    5001 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 10:40:51.234189    5001 kubeadm.go:310] 
	I0927 10:40:51.234219    5001 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 10:40:51.234226    5001 kubeadm.go:310] 
	I0927 10:40:51.234274    5001 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 10:40:51.234279    5001 kubeadm.go:310] 
	I0927 10:40:51.234325    5001 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 10:40:51.234381    5001 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 10:40:51.234431    5001 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 10:40:51.234436    5001 kubeadm.go:310] 
	I0927 10:40:51.234492    5001 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 10:40:51.234539    5001 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 10:40:51.234541    5001 kubeadm.go:310] 
	I0927 10:40:51.234591    5001 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jrf2xd.lubh65ru8b16tcp9 \
	I0927 10:40:51.234645    5001 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95f688fe91b88dce6b995ff3f6bae2601ecac9e72bc38ebf6a40f1df30a0f1f1 \
	I0927 10:40:51.234659    5001 kubeadm.go:310] 	--control-plane 
	I0927 10:40:51.234663    5001 kubeadm.go:310] 
	I0927 10:40:51.234705    5001 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 10:40:51.234710    5001 kubeadm.go:310] 
	I0927 10:40:51.234759    5001 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jrf2xd.lubh65ru8b16tcp9 \
	I0927 10:40:51.234826    5001 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95f688fe91b88dce6b995ff3f6bae2601ecac9e72bc38ebf6a40f1df30a0f1f1 
	I0927 10:40:51.234899    5001 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
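The join commands kubeadm prints embed a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed with the standard openssl pipeline from the kubeadm documentation (the path below is kubeadm's default PKI location; minikube keeps its copies under /var/lib/minikube/certs):

    # Recompute the discovery-token CA cert hash from the cluster CA certificate.
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'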
	I0927 10:40:51.234912    5001 cni.go:84] Creating CNI manager for ""
	I0927 10:40:51.234922    5001 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:40:51.238865    5001 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 10:40:51.244816    5001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 10:40:51.247924    5001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
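The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration. A representative bridge conflist written via heredoc; the field values here (the subnet in particular) are illustrative, not a byte-for-byte copy of minikube's template:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF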
	I0927 10:40:51.254912    5001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 10:40:51.254968    5001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 10:40:51.254986    5001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-198000 minikube.k8s.io/updated_at=2024_09_27T10_40_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=running-upgrade-198000 minikube.k8s.io/primary=true
	I0927 10:40:51.298093    5001 ops.go:34] apiserver oom_adj: -16
	I0927 10:40:51.298131    5001 kubeadm.go:1113] duration metric: took 43.216625ms to wait for elevateKubeSystemPrivileges
	I0927 10:40:51.298139    5001 kubeadm.go:394] duration metric: took 4m14.68234525s to StartCluster
	I0927 10:40:51.298150    5001 settings.go:142] acquiring lock: {Name:mk58fc55a93399a03fb1c9ac710554db41068524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:40:51.298241    5001 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:40:51.298649    5001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/kubeconfig: {Name:mk8300c379932403020f33b54e2599e68fb2c757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:40:51.298879    5001 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:40:51.298884    5001 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 10:40:51.298922    5001 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-198000"
	I0927 10:40:51.298937    5001 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-198000"
	W0927 10:40:51.298942    5001 addons.go:243] addon storage-provisioner should already be in state true
	I0927 10:40:51.298952    5001 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-198000"
	I0927 10:40:51.298961    5001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-198000"
	I0927 10:40:51.298952    5001 host.go:66] Checking if "running-upgrade-198000" exists ...
	I0927 10:40:51.298991    5001 config.go:182] Loaded profile config "running-upgrade-198000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:40:51.299798    5001 kapi.go:59] client config for running-upgrade-198000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/client.key", CAFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1028f65d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 10:40:51.299920    5001 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-198000"
	W0927 10:40:51.299924    5001 addons.go:243] addon default-storageclass should already be in state true
	I0927 10:40:51.299931    5001 host.go:66] Checking if "running-upgrade-198000" exists ...
	I0927 10:40:51.302813    5001 out.go:177] * Verifying Kubernetes components...
	I0927 10:40:51.303153    5001 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 10:40:51.305951    5001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 10:40:51.305959    5001 sshutil.go:53] new ssh client: &{IP:localhost Port:50255 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/running-upgrade-198000/id_rsa Username:docker}
	I0927 10:40:51.308842    5001 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:40:51.311868    5001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:40:51.317845    5001 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 10:40:51.317852    5001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 10:40:51.317859    5001 sshutil.go:53] new ssh client: &{IP:localhost Port:50255 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/running-upgrade-198000/id_rsa Username:docker}
	I0927 10:40:51.385406    5001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 10:40:51.391157    5001 api_server.go:52] waiting for apiserver process to appear ...
	I0927 10:40:51.391208    5001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:40:51.395290    5001 api_server.go:72] duration metric: took 96.402459ms to wait for apiserver process to appear ...
	I0927 10:40:51.395297    5001 api_server.go:88] waiting for apiserver healthz status ...
	I0927 10:40:51.395305    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:51.426185    5001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 10:40:51.450574    5001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
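Addon manifests are staged under /etc/kubernetes/addons and applied with the version-pinned kubectl against the in-VM kubeconfig, exactly as the two Run lines above show; by hand:

    # Apply a staged addon manifest with the version-matched kubectl and node-local kubeconfig.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.24.1/kubectl apply \
        -f /etc/kubernetes/addons/storage-provisioner.yaml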
	I0927 10:40:51.751205    5001 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 10:40:51.751218    5001 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 10:40:56.397389    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:56.397504    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:01.398318    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:01.398343    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:06.398984    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:06.399005    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:11.399573    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:11.399624    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:16.400437    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:16.400477    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:21.401545    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:21.401588    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0927 10:41:21.752697    5001 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0927 10:41:21.756830    5001 out.go:177] * Enabled addons: storage-provisioner
	I0927 10:41:21.765596    5001 addons.go:510] duration metric: took 30.467502958s for enable addons: enabled=[storage-provisioner]
	I0927 10:41:26.402968    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:26.403001    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:31.404717    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:31.404741    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:36.406785    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:36.406816    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:41.408869    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:41.408922    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:46.411092    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:46.411132    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:51.413322    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:51.413583    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:51.441548    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:41:51.441679    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:51.459651    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:41:51.459771    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:51.491522    5001 logs.go:276] 2 containers: [11c6047c72b7 71bf4fcc074d]
	I0927 10:41:51.491609    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:51.508125    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:41:51.508212    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:51.519628    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:41:51.519721    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:51.530048    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:41:51.530120    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:51.540035    5001 logs.go:276] 0 containers: []
	W0927 10:41:51.540046    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:51.540121    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:51.551397    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:41:51.551412    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:41:51.551417    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:41:51.566279    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:41:51.566289    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:41:51.578353    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:41:51.578369    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:41:51.590135    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:41:51.590145    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:41:51.604498    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:41:51.604511    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:41:51.621751    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:41:51.621761    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:41:51.633200    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:51.633210    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:41:51.651769    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:41:51.651867    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:41:51.668497    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:41:51.668504    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:41:51.682937    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:51.682947    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:51.707491    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:41:51.707498    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:41:51.718995    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:41:51.719009    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:51.730090    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:51.730102    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:51.734755    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:51.734762    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:51.773031    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:41:51.773044    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:41:51.773072    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:41:51.773076    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:41:51.773080    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:41:51.773085    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:41:51.773088    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:42:01.776979    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:06.777982    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:06.778173    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:06.794530    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:42:06.794629    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:06.808384    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:42:06.808475    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:06.819469    5001 logs.go:276] 2 containers: [11c6047c72b7 71bf4fcc074d]
	I0927 10:42:06.819551    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:06.829757    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:42:06.829838    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:06.839874    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:42:06.839949    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:06.850581    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:42:06.850660    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:06.861110    5001 logs.go:276] 0 containers: []
	W0927 10:42:06.861121    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:06.861185    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:06.871185    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:42:06.871202    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:42:06.871207    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:42:06.883182    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:42:06.883197    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:42:06.905933    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:42:06.905948    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:42:06.917528    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:06.917542    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:06.942672    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:06.942679    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:06.947401    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:42:06.947410    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:42:06.961433    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:42:06.961444    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:42:06.974750    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:42:06.974759    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:42:06.986077    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:42:06.986087    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:06.997783    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:06.997792    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:42:07.014537    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:07.014634    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
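
The two "Found kubelet problem" warnings above come from scanning the journalctl output for known-bad kubelet patterns. A simplified version of that scan (the pattern list here is illustrative, not minikube's actual set):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

// Patterns that usually indicate a real kubelet problem rather than noise.
var problemRe = regexp.MustCompile(`Failed to watch|failed to list|is forbidden`)

func main() {
	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if problemRe.MatchString(sc.Text()) {
			fmt.Println("Found kubelet problem:", sc.Text())
		}
	}
}
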
	I0927 10:42:07.031357    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:07.031362    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:07.069520    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:42:07.069531    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:42:07.081067    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:42:07.081078    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:42:07.096666    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:07.096683    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:42:07.096711    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:42:07.096717    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:07.096720    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:07.096728    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:07.096731    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
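
The recurring kubelet problem itself is a node-authorizer denial: the kubelet, acting as system:node:running-upgrade-198000, is refused list/watch on the kube-root-ca.crt ConfigMap because the authorizer finds no pod on that node that references the object, which is typical while a cluster is mid-upgrade and the apiserver is not fully serving. One way to reproduce such a denial from the affected credentials is a SelfSubjectAccessReview; the kubeconfig path below is the kubeadm default and an assumption here:

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubelet's own credentials (path assumed).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the apiserver whether *this* identity may list the ConfigMap
	// that the kubelet was denied in the log above.
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "list",
				Resource:  "configmaps",
				Name:      "kube-root-ca.crt",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
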
	I0927 10:42:17.099802    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:22.101356    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:22.101575    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:22.124892    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:42:22.124993    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:22.137947    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:42:22.138027    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:22.149462    5001 logs.go:276] 2 containers: [11c6047c72b7 71bf4fcc074d]
	I0927 10:42:22.149552    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:22.160021    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:42:22.160092    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:22.170332    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:42:22.170408    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:22.180977    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:42:22.181065    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:22.190960    5001 logs.go:276] 0 containers: []
	W0927 10:42:22.190974    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:22.191044    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:22.202242    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:42:22.202260    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:42:22.202266    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:42:22.213837    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:42:22.213850    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:42:22.225571    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:22.225581    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:22.230215    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:22.230224    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:22.273023    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:42:22.273037    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:42:22.287337    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:42:22.287347    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:42:22.303950    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:42:22.303962    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:42:22.315629    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:42:22.315642    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:42:22.333600    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:22.333609    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:22.357241    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:42:22.357249    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
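
The "container status" step above goes through a shell fallback: prefer crictl when it is installed, otherwise fall back to docker ps -a. A sketch of the same preference order (helper name assumed):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when present, mirroring the shell fallback
// seen in the log: sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	fmt.Println(err)
	fmt.Print(out)
}
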
	I0927 10:42:22.368599    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:22.368610    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:42:22.385646    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:22.385744    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:22.402374    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:42:22.402380    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:42:22.416547    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:42:22.416557    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:42:22.427666    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:22.427677    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:42:22.427704    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:42:22.427709    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:22.427712    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:22.427717    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:22.427719    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:42:32.431655    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:37.434260    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:37.434723    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:37.477761    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:42:37.477913    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:37.496772    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:42:37.496875    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:37.511347    5001 logs.go:276] 2 containers: [11c6047c72b7 71bf4fcc074d]
	I0927 10:42:37.511427    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:37.523361    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:42:37.523432    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:37.534086    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:42:37.534159    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:37.545300    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:42:37.545387    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:37.555501    5001 logs.go:276] 0 containers: []
	W0927 10:42:37.555513    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:37.555583    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:37.566523    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:42:37.566541    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:42:37.566547    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:42:37.589938    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:42:37.589947    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:42:37.602500    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:37.602510    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:42:37.619312    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:37.619411    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:37.635758    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:37.635765    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:37.672634    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:42:37.672645    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:42:37.690095    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:42:37.690106    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:42:37.701994    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:42:37.702006    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:42:37.717206    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:42:37.717216    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:37.729080    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:37.729091    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:37.733480    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:42:37.733486    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:42:37.748578    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:42:37.748588    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:42:37.760152    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:42:37.760164    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:42:37.772091    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:37.772101    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:37.795321    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:37.795330    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:42:37.795354    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:42:37.795359    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:37.795362    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:37.795366    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:37.795383    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:42:47.799337    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:52.802019    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:52.802352    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:52.828585    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:42:52.828745    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:52.846487    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:42:52.846596    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:52.860125    5001 logs.go:276] 2 containers: [11c6047c72b7 71bf4fcc074d]
	I0927 10:42:52.860213    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:52.871413    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:42:52.871496    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:52.882449    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:42:52.882528    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:52.893030    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:42:52.893108    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:52.905714    5001 logs.go:276] 0 containers: []
	W0927 10:42:52.905726    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:52.905802    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:52.916269    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:42:52.916285    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:52.916291    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:52.951794    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:42:52.951809    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:42:52.966104    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:42:52.966118    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:42:52.984255    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:42:52.984269    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:42:52.996182    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:42:52.996196    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:42:53.013691    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:42:53.013706    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:42:53.029458    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:53.029473    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:53.034223    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:42:53.034231    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:42:53.048278    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:42:53.048292    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:42:53.059671    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:42:53.059685    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:42:53.077150    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:53.077163    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:53.100174    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:42:53.100181    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:53.114387    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:53.114399    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:42:53.131523    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:53.131621    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:53.148485    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:53.148491    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:42:53.148518    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:42:53.148523    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:53.148525    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:53.148529    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:53.148532    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:43:03.152471    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:08.154671    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:08.155189    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:08.195741    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:43:08.195877    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:08.212859    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:43:08.212957    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:08.226488    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:43:08.226581    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:08.237863    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:43:08.237939    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:08.247966    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:43:08.248055    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:08.258231    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:43:08.258307    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:08.268712    5001 logs.go:276] 0 containers: []
	W0927 10:43:08.268723    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:08.268793    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:08.279358    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:43:08.279375    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:43:08.279380    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:43:08.294495    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:43:08.294504    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:43:08.306490    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:43:08.306502    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:08.319920    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:08.319933    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:43:08.336474    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:08.336571    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:08.353529    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:43:08.353535    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:43:08.370640    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:43:08.370652    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:43:08.382655    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:43:08.382666    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:43:08.394449    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:43:08.394464    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:43:08.408985    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:43:08.408994    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:43:08.420403    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:43:08.420416    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:43:08.432194    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:08.432203    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:08.436596    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:08.436604    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:08.470633    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:08.470644    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:08.495602    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:43:08.495611    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:43:08.506322    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:43:08.506332    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:43:08.530731    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:08.530744    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:43:08.530767    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:43:08.530772    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:08.530775    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:08.530779    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:08.530782    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:43:18.532973    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:23.535213    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:23.535450    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:23.552544    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:43:23.552646    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:23.565464    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:43:23.565553    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:23.577002    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:43:23.577086    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:23.587669    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:43:23.587756    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:23.597845    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:43:23.597928    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:23.608360    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:43:23.608438    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:23.618390    5001 logs.go:276] 0 containers: []
	W0927 10:43:23.618401    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:23.618471    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:23.628957    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:43:23.628974    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:23.628979    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:43:23.645486    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:23.645583    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:23.662177    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:43:23.662185    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:43:23.679988    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:43:23.680002    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:23.691926    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:43:23.691938    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:43:23.704637    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:43:23.704649    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:43:23.720280    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:43:23.720294    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:43:23.737936    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:43:23.737949    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:43:23.753319    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:23.753328    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:23.758233    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:43:23.758240    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:43:23.772654    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:43:23.772667    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:43:23.784529    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:43:23.784542    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:43:23.798542    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:43:23.798552    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:43:23.810240    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:43:23.810252    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:43:23.822403    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:23.822414    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:23.858105    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:23.858120    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:23.883070    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:23.883079    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:43:23.883109    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:43:23.883115    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:23.883124    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:23.883127    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:23.883130    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:43:33.887032    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:38.889197    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:38.889486    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:38.914624    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:43:38.914751    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:38.933088    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:43:38.933181    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:38.947290    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:43:38.947381    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:38.958717    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:43:38.958800    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:38.969333    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:43:38.969409    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:38.979556    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:43:38.979637    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:38.989643    5001 logs.go:276] 0 containers: []
	W0927 10:43:38.989657    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:38.989731    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:39.000338    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:43:39.000356    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:43:39.000361    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:43:39.014888    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:43:39.014903    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:43:39.025918    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:43:39.025931    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:43:39.037461    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:43:39.037470    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:43:39.049503    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:39.049513    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:39.084922    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:43:39.084933    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:43:39.098986    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:43:39.098997    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:43:39.111722    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:39.111738    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:43:39.128201    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:39.128541    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:39.146052    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:39.146060    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:39.150830    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:43:39.150839    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:43:39.165384    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:43:39.165394    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:43:39.180227    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:43:39.180237    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:43:39.197228    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:43:39.197239    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:43:39.209384    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:39.209394    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:39.234471    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:43:39.234483    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:39.246162    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:39.246173    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:43:39.246202    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:43:39.246207    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:39.246220    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:39.246224    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:39.246228    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:43:49.250106    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:54.252395    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:54.252866    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:54.287569    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:43:54.287724    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:54.305376    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:43:54.305493    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:54.319162    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:43:54.319261    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:54.332325    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:43:54.332411    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:54.343332    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:43:54.343414    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:54.354270    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:43:54.354350    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:54.365272    5001 logs.go:276] 0 containers: []
	W0927 10:43:54.365283    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:54.365353    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:54.376503    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:43:54.376520    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:43:54.376524    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:43:54.391197    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:43:54.391211    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:43:54.406507    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:43:54.406520    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:43:54.424216    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:43:54.424232    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:43:54.443849    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:54.443861    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:54.449195    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:54.449208    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:54.485746    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:43:54.485760    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:43:54.497788    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:54.497799    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:54.522958    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:43:54.522966    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:54.534446    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:43:54.534459    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:43:54.548640    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:43:54.548650    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:43:54.560739    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:43:54.560752    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:43:54.573020    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:43:54.573031    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:43:54.584892    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:54.584905    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:43:54.601945    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:54.602042    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:54.618638    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:43:54.618645    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:43:54.630345    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:54.630358    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:43:54.630389    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:43:54.630394    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:54.630398    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:54.630401    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:54.630405    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:44:04.634234    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:09.636319    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:09.636421    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:44:09.647440    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:44:09.647514    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:44:09.658688    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:44:09.658764    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:44:09.669163    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:44:09.669252    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:44:09.679882    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:44:09.679960    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:44:09.690567    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:44:09.690645    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:44:09.701260    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:44:09.701339    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:44:09.712401    5001 logs.go:276] 0 containers: []
	W0927 10:44:09.712414    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:44:09.712487    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:44:09.723473    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:44:09.723489    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:44:09.723494    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:44:09.736442    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:44:09.736453    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:44:09.808727    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:44:09.808739    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:44:09.823605    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:44:09.823621    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:44:09.844560    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:44:09.844573    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:44:09.855735    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:44:09.855748    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:44:09.867492    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:44:09.867504    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:44:09.891513    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:44:09.891520    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:44:09.902860    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:44:09.902872    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:44:09.919419    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:09.919517    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:09.936057    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:44:09.936062    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:44:09.940769    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:44:09.940777    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:44:09.952509    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:44:09.952519    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:44:09.964363    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:44:09.964372    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:44:09.982878    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:44:09.982892    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:44:10.000777    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:44:10.000785    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:44:10.012365    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:10.012376    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:44:10.012401    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:44:10.012405    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:10.012408    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:10.012411    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:10.012414    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:44:20.016309    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:25.018401    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:25.018569    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:44:25.034015    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:44:25.034106    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:44:25.044730    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:44:25.044811    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:44:25.055656    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:44:25.055744    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:44:25.066738    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:44:25.066826    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:44:25.077587    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:44:25.077673    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:44:25.088383    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:44:25.088465    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:44:25.102690    5001 logs.go:276] 0 containers: []
	W0927 10:44:25.102701    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:44:25.102779    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:44:25.113271    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:44:25.113290    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:44:25.113296    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:44:25.125114    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:44:25.125126    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:44:25.137095    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:44:25.137105    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:44:25.174902    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:44:25.174912    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:44:25.186704    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:44:25.186715    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:44:25.206702    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:44:25.206713    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:44:25.224160    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:44:25.224170    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:44:25.235709    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:44:25.235720    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:44:25.260129    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:44:25.260137    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:44:25.264414    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:44:25.264424    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:44:25.275893    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:44:25.275903    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:44:25.288180    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:44:25.288194    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:44:25.299819    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:44:25.299829    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:44:25.316773    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:25.316871    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:25.333413    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:44:25.333419    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:44:25.348150    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:44:25.348159    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:44:25.371639    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:25.371649    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:44:25.371677    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:44:25.371683    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:25.371686    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:25.371689    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:25.371692    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:44:35.374315    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:40.376478    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:40.376674    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:44:40.395466    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:44:40.395567    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:44:40.408654    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:44:40.408742    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:44:40.419336    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:44:40.419426    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:44:40.430132    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:44:40.430213    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:44:40.440895    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:44:40.440975    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:44:40.451664    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:44:40.451750    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:44:40.461873    5001 logs.go:276] 0 containers: []
	W0927 10:44:40.461883    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:44:40.461945    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:44:40.472435    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:44:40.472451    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:44:40.472456    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:44:40.476939    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:44:40.476946    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:44:40.488881    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:44:40.488890    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:44:40.503629    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:44:40.503639    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:44:40.526351    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:44:40.526362    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:44:40.538639    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:44:40.538648    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:44:40.553064    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:44:40.553073    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:44:40.564624    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:44:40.564634    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:44:40.576208    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:44:40.576218    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:44:40.587486    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:44:40.587495    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:44:40.624857    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:44:40.624867    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:44:40.639939    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:44:40.639949    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:44:40.651475    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:44:40.651486    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:44:40.675393    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:44:40.675411    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:44:40.695332    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:40.695438    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:40.712483    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:44:40.712493    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:44:40.724071    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:40.724082    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:44:40.724107    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:44:40.724112    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:40.724115    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:40.724118    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:40.724121    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:44:50.726282    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:55.728580    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:55.732126    5001 out.go:201] 
	W0927 10:44:55.736136    5001 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0927 10:44:55.736150    5001 out.go:270] * 
	W0927 10:44:55.737147    5001 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:44:55.748094    5001 out.go:201] 

** /stderr **
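
The stderr block above is minikube's node-readiness loop: roughly every ten seconds it issues GET https://10.0.2.15:8443/healthz with a ~5s client timeout, re-enumerates the control-plane containers on failure (docker ps -a --filter=name=k8s_<component>), gathers their logs, and finally gives up when the 6m0s node-start budget is exhausted (the GUEST_START error). The Go sketch below reproduces only the polling pattern; the timeouts are assumptions inferred from the log timestamps, and this is not minikube's actual api_server.go implementation.

// healthz_poll.go - minimal sketch of the polling loop visible in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout (~5s request/fail gaps in the log)
		Transport: &http.Transport{
			// Assumption: the guest apiserver's certificate is not trusted by the host.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(10 * time.Second) // ~10s between attempts in the log
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err) // corresponds to the GUEST_START exit above
	}
}

In this run the poll never succeeds: the kube-apiserver container (db3364becc55) exists but healthz never answers, so the loop's diagnostics (container logs, dmesg, journalctl) repeat verbatim until the deadline.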
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-198000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-27 10:44:55.825475 -0700 PDT m=+2997.216303293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-198000 -n running-upgrade-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-198000 -n running-upgrade-198000: exit status 2 (15.687202584s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
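
The status check above exits with code 2 yet still prints the host state ("Running") on stdout, which is why the harness notes "may be ok": minikube status encodes cluster state in its exit code. A minimal sketch of that pattern, using a hypothetical helper around os/exec (not the suite's actual helpers_test.go code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus runs `minikube status --format={{.Host}}` and keeps stdout even
// when the command exits non-zero, since a non-zero exit may simply encode a
// degraded (but still reportable) cluster state.
func hostStatus(minikubeBin, profile string) (string, error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is captured even on a non-zero exit
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		return "", err // fail only on errors other than a non-zero exit code
	}
	return string(out), nil
}

func main() {
	state, err := hostStatus("out/minikube-darwin-arm64", "running-upgrade-198000")
	fmt.Printf("host state: %q (err: %v)\n", state, err)
}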
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-198000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-706000          | force-systemd-flag-706000 | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-679000              | force-systemd-env-679000  | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-679000           | force-systemd-env-679000  | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT | 27 Sep 24 10:35 PDT |
	| start   | -p docker-flags-126000                | docker-flags-126000       | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-706000             | force-systemd-flag-706000 | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-706000          | force-systemd-flag-706000 | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT | 27 Sep 24 10:35 PDT |
	| start   | -p cert-expiration-754000             | cert-expiration-754000    | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-126000 ssh               | docker-flags-126000       | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-126000 ssh               | docker-flags-126000       | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-126000                | docker-flags-126000       | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT | 27 Sep 24 10:35 PDT |
	| start   | -p cert-options-200000                | cert-options-200000       | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-200000 ssh               | cert-options-200000       | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-200000 -- sudo        | cert-options-200000       | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-200000                | cert-options-200000       | jenkins | v1.34.0 | 27 Sep 24 10:35 PDT | 27 Sep 24 10:35 PDT |
	| start   | -p running-upgrade-198000             | minikube                  | jenkins | v1.26.0 | 27 Sep 24 10:35 PDT | 27 Sep 24 10:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-198000             | running-upgrade-198000    | jenkins | v1.34.0 | 27 Sep 24 10:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-754000             | cert-expiration-754000    | jenkins | v1.34.0 | 27 Sep 24 10:38 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-754000             | cert-expiration-754000    | jenkins | v1.34.0 | 27 Sep 24 10:38 PDT | 27 Sep 24 10:38 PDT |
	| start   | -p kubernetes-upgrade-768000          | kubernetes-upgrade-768000 | jenkins | v1.34.0 | 27 Sep 24 10:38 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-768000          | kubernetes-upgrade-768000 | jenkins | v1.34.0 | 27 Sep 24 10:38 PDT | 27 Sep 24 10:38 PDT |
	| start   | -p kubernetes-upgrade-768000          | kubernetes-upgrade-768000 | jenkins | v1.34.0 | 27 Sep 24 10:38 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-768000          | kubernetes-upgrade-768000 | jenkins | v1.34.0 | 27 Sep 24 10:38 PDT | 27 Sep 24 10:38 PDT |
	| start   | -p stopped-upgrade-862000             | minikube                  | jenkins | v1.26.0 | 27 Sep 24 10:38 PDT | 27 Sep 24 10:39 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-862000 stop           | minikube                  | jenkins | v1.26.0 | 27 Sep 24 10:39 PDT | 27 Sep 24 10:39 PDT |
	| start   | -p stopped-upgrade-862000             | stopped-upgrade-862000    | jenkins | v1.34.0 | 27 Sep 24 10:39 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 10:39:30
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
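	
	Every entry below follows the klog header format just declared. A small sketch (hypothetical, not part of the test suite) that splits one such line into its fields; note the "threadid" slot actually carries the process id (5160 here, 5001 for the concurrently running upgrade test):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// severity [IWEF], mmdd, hh:mm:ss.uuuuuu, thread/process id, file:line, message
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)
	
	func main() {
		m := klogLine.FindStringSubmatch("I0927 10:39:30.163370    5160 out.go:345] Setting OutFile to fd 1 ...")
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}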
	I0927 10:39:30.163370    5160 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:39:30.163518    5160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:39:30.163521    5160 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:30.163524    5160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:39:30.163681    5160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:39:30.164772    5160 out.go:352] Setting JSON to false
	I0927 10:39:30.184007    5160 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4134,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:39:30.184085    5160 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:39:30.188862    5160 out.go:177] * [stopped-upgrade-862000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:39:30.196780    5160 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:39:30.196817    5160 notify.go:220] Checking for updates...
	I0927 10:39:30.203879    5160 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:39:30.206866    5160 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:39:30.209902    5160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:39:30.212841    5160 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:39:30.215821    5160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:39:30.219068    5160 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:39:30.222865    5160 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0927 10:39:30.225864    5160 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:39:30.229871    5160 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:39:30.236746    5160 start.go:297] selected driver: qemu2
	I0927 10:39:30.236750    5160 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0927 10:39:30.236798    5160 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:39:30.239440    5160 cni.go:84] Creating CNI manager for ""
	I0927 10:39:30.239474    5160 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:39:30.239497    5160 start.go:340] cluster config:
	{Name:stopped-upgrade-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0927 10:39:30.239544    5160 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:39:30.247702    5160 out.go:177] * Starting "stopped-upgrade-862000" primary control-plane node in "stopped-upgrade-862000" cluster
	I0927 10:39:30.251796    5160 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0927 10:39:30.251808    5160 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0927 10:39:30.251812    5160 cache.go:56] Caching tarball of preloaded images
	I0927 10:39:30.251855    5160 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:39:30.251860    5160 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0927 10:39:30.251910    5160 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/config.json ...
	I0927 10:39:30.252385    5160 start.go:360] acquireMachinesLock for stopped-upgrade-862000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:39:30.252411    5160 start.go:364] duration metric: took 20.625µs to acquireMachinesLock for "stopped-upgrade-862000"
	I0927 10:39:30.252418    5160 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:39:30.252422    5160 fix.go:54] fixHost starting: 
	I0927 10:39:30.252517    5160 fix.go:112] recreateIfNeeded on stopped-upgrade-862000: state=Stopped err=<nil>
	W0927 10:39:30.252525    5160 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:39:30.260780    5160 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-862000" ...
	I0927 10:39:26.356288    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:39:26.356482    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:39:26.374428    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:39:26.374542    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:39:26.387506    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:39:26.387588    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:39:26.399200    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:39:26.399284    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:39:26.410303    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:39:26.410384    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:39:26.420587    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:39:26.420669    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:39:26.431408    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:39:26.431498    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:39:26.445658    5001 logs.go:276] 0 containers: []
	W0927 10:39:26.445670    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:39:26.445744    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:39:26.456804    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:39:26.456823    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:39:26.456830    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:39:26.461032    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:39:26.461038    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:39:26.473072    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:39:26.473085    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:39:26.487580    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:39:26.487591    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:39:26.504749    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:39:26.504761    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:39:26.517698    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:39:26.517710    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:39:26.531176    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:39:26.531186    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:39:26.572528    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:26.572635    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:26.573688    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:39:26.573695    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:39:26.592475    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:39:26.592491    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:39:26.604837    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:39:26.604848    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:39:26.650322    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:39:26.650338    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:39:26.674350    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:39:26.674366    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:39:26.689304    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:39:26.689318    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:39:26.709326    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:39:26.709340    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:39:26.729284    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:39:26.729298    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:39:26.741350    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:39:26.741363    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:39:26.766893    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:26.766911    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:39:26.766946    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:39:26.766953    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:26.766957    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:26.766961    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:26.766973    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
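Interleaved with the restore above, process 5001 (the running-upgrade-198000 test) is polling the apiserver health endpoint and, on each timeout, enumerating the per-component containers and tailing their logs. A rough sketch of that poll-then-diagnose loop; the endpoint and timeout behavior are taken from the log, and the diagnose step is stubbed as a comment:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second, // the log shows Client.Timeout expiring while awaiting headers
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 3; i++ {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err)
    			// Here minikube runs `docker ps -a --filter=name=k8s_<component>`
    			// per component and tails the last 400 lines of each container's
    			// logs (logs.go:123) before retrying.
    			time.Sleep(10 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("healthz:", resp.Status)
    		return
    	}
    }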
	I0927 10:39:30.264838    5160 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:39:30.264908    5160 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50492-:22,hostfwd=tcp::50493-:2376,hostname=stopped-upgrade-862000 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/disk.qcow2
	I0927 10:39:30.311158    5160 main.go:141] libmachine: STDOUT: 
	I0927 10:39:30.311189    5160 main.go:141] libmachine: STDERR: 
	I0927 10:39:30.311194    5160 main.go:141] libmachine: Waiting for VM to start (ssh -p 50492 docker@127.0.0.1)...
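The qemu2 driver uses user-mode networking, so the guest's SSH (22) and Docker (2376) ports are only reachable through the hostfwd rules, which is why the wait above dials docker@127.0.0.1 on port 50492 rather than the guest IP. A sketch of assembling such an argument list, with paths and ports copied from the log; this is not the driver's actual builder:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	machineDir := "/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000"
    	nic := "user,model=virtio,hostfwd=tcp::50492-:22,hostfwd=tcp::50493-:2376,hostname=stopped-upgrade-862000"
    	args := []string{
    		"-M", "virt,highmem=off",
    		"-cpu", "host",
    		"-accel", "hvf", // Hypervisor.framework acceleration, as logged by qemu.go:418
    		"-m", "2200", "-smp", "2",
    		"-nic", nic,
    		"-daemonize", machineDir + "/disk.qcow2",
    	}
    	cmd := exec.Command("qemu-system-aarch64", args...)
    	fmt.Println(strings.Join(cmd.Args, " "))
    }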
	I0927 10:39:36.770865    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:39:41.771015    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:39:41.771160    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:39:41.783603    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:39:41.783697    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:39:41.815133    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:39:41.815226    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:39:41.830660    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:39:41.830760    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:39:41.845327    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:39:41.845413    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:39:41.857753    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:39:41.857845    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:39:41.870023    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:39:41.870111    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:39:41.881822    5001 logs.go:276] 0 containers: []
	W0927 10:39:41.881837    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:39:41.881916    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:39:41.894131    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:39:41.894152    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:39:41.894158    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:39:41.907698    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:39:41.907709    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:39:41.925204    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:39:41.925225    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:39:41.939722    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:39:41.939737    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:39:41.944590    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:39:41.944603    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:39:41.961203    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:39:41.961217    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:39:41.974614    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:39:41.974626    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:39:41.999810    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:39:41.999826    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:39:42.015748    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:39:42.015763    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:39:42.035039    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:39:42.035058    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:39:42.056770    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:39:42.056784    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:39:42.072645    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:39:42.072659    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:39:42.111881    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:39:42.111893    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:39:42.129899    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:39:42.129911    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:39:42.142570    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:39:42.142584    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:39:42.155137    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:39:42.155148    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:39:42.193255    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:42.193364    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:42.194416    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:42.194421    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:39:42.194451    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:39:42.194456    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:42.194460    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:42.194463    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:42.194466    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:39:49.901445    5160 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/config.json ...
	I0927 10:39:49.902285    5160 machine.go:93] provisionDockerMachine start ...
	I0927 10:39:49.902485    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:49.902894    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:49.902910    5160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 10:39:49.987240    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 10:39:49.987272    5160 buildroot.go:166] provisioning hostname "stopped-upgrade-862000"
	I0927 10:39:49.987414    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:49.987676    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:49.987692    5160 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-862000 && echo "stopped-upgrade-862000" | sudo tee /etc/hostname
	I0927 10:39:50.061999    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-862000
	
	I0927 10:39:50.062074    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:50.062223    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:50.062234    5160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-862000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-862000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-862000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 10:39:50.127255    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
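The hostname step above is idempotent: grep -xq looks for an exact whole-line match before either rewriting an existing 127.0.1.1 entry in place or appending a new one, so repeated provisioning runs never duplicate entries. A sketch of templating that same script for an arbitrary hostname (hypothetical helper, not minikube's):

    package main

    import "fmt"

    const hostsScript = `if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`

    func main() {
    	fmt.Printf(hostsScript+"\n", "stopped-upgrade-862000")
    }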
	I0927 10:39:50.127269    5160 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19712-1508/.minikube CaCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19712-1508/.minikube}
	I0927 10:39:50.127280    5160 buildroot.go:174] setting up certificates
	I0927 10:39:50.127290    5160 provision.go:84] configureAuth start
	I0927 10:39:50.127296    5160 provision.go:143] copyHostCerts
	I0927 10:39:50.127363    5160 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem, removing ...
	I0927 10:39:50.127368    5160 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem
	I0927 10:39:50.127476    5160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem (1679 bytes)
	I0927 10:39:50.127659    5160 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem, removing ...
	I0927 10:39:50.127662    5160 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem
	I0927 10:39:50.127704    5160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem (1078 bytes)
	I0927 10:39:50.127799    5160 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem, removing ...
	I0927 10:39:50.127803    5160 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem
	I0927 10:39:50.127840    5160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem (1123 bytes)
	I0927 10:39:50.127928    5160 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-862000 san=[127.0.0.1 localhost minikube stopped-upgrade-862000]
	I0927 10:39:50.241825    5160 provision.go:177] copyRemoteCerts
	I0927 10:39:50.241873    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 10:39:50.241883    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	I0927 10:39:50.275309    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 10:39:50.281755    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 10:39:50.288440    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0927 10:39:50.295515    5160 provision.go:87] duration metric: took 168.216709ms to configureAuth
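configureAuth does two things: it refreshes the host-side copies of key.pem, ca.pem, and cert.pem, then generates a server certificate whose SANs cover every name the machine may be dialed by (127.0.0.1, localhost, minikube, and the profile name, per the san=[...] line above) and scps it to /etc/docker. A compressed sketch of the host-side copy step, following the found/rm/cp sequence exec_runner logs:

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // copyHostCert mirrors exec_runner.go's found/rm/cp sequence: replace the
    // destination wholesale so a stale cert never survives re-provisioning.
    func copyHostCert(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		fmt.Println("found", dst, ", removing ...")
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	home := os.Getenv("MINIKUBE_HOME")
    	for _, f := range []string{"key.pem", "ca.pem", "cert.pem"} {
    		if err := copyHostCert(filepath.Join(home, "certs", f), filepath.Join(home, f)); err != nil {
    			fmt.Println("copy failed:", err)
    		}
    	}
    }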
	I0927 10:39:50.295524    5160 buildroot.go:189] setting minikube options for container-runtime
	I0927 10:39:50.295618    5160 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:39:50.295659    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:50.295739    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:50.295744    5160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0927 10:39:50.354153    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0927 10:39:50.354162    5160 buildroot.go:70] root file system type: tmpfs
	I0927 10:39:50.354215    5160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0927 10:39:50.354262    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:50.354364    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:50.354398    5160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0927 10:39:50.417860    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0927 10:39:50.417928    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:50.418047    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:50.418057    5160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0927 10:39:50.755109    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0927 10:39:50.755126    5160 machine.go:96] duration metric: took 852.846458ms to provisionDockerMachine
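The unit install above is a compare-then-swap: the new file is written to docker.service.new, and only when `diff -u` reports a difference (or, as here, the target does not exist yet) is it moved into place and the daemon reloaded, enabled, and restarted. That avoids needless docker restarts when reprovisioning an unchanged machine. The same idiom reduced to a local sketch, without the systemctl half:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // installIfChanged mirrors the `diff || { mv && systemctl ...; }` idiom:
    // write the new unit beside the old one and only swap it in (triggering a
    // reload/restart in the real flow) when the content actually differs.
    func installIfChanged(path string, newContent []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, newContent) {
    		return false, nil // identical: skip the restart entirely
    	}
    	if err := os.WriteFile(path+".new", newContent, 0o644); err != nil {
    		return false, err
    	}
    	return true, os.Rename(path+".new", path)
    }

    func main() {
    	changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
    	fmt.Println("changed:", changed, "err:", err)
    }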
	I0927 10:39:50.755133    5160 start.go:293] postStartSetup for "stopped-upgrade-862000" (driver="qemu2")
	I0927 10:39:50.755140    5160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 10:39:50.755199    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 10:39:50.755208    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	I0927 10:39:50.787501    5160 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 10:39:50.788728    5160 info.go:137] Remote host: Buildroot 2021.02.12
	I0927 10:39:50.788737    5160 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19712-1508/.minikube/addons for local assets ...
	I0927 10:39:50.788810    5160 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19712-1508/.minikube/files for local assets ...
	I0927 10:39:50.788907    5160 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem -> 20392.pem in /etc/ssl/certs
	I0927 10:39:50.789008    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 10:39:50.791941    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem --> /etc/ssl/certs/20392.pem (1708 bytes)
	I0927 10:39:50.799023    5160 start.go:296] duration metric: took 43.886084ms for postStartSetup
	I0927 10:39:50.799036    5160 fix.go:56] duration metric: took 20.547150292s for fixHost
	I0927 10:39:50.799101    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:50.799207    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:50.799212    5160 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 10:39:50.856710    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727458790.778619587
	
	I0927 10:39:50.856720    5160 fix.go:216] guest clock: 1727458790.778619587
	I0927 10:39:50.856724    5160 fix.go:229] Guest: 2024-09-27 10:39:50.778619587 -0700 PDT Remote: 2024-09-27 10:39:50.799038 -0700 PDT m=+20.666312543 (delta=-20.418413ms)
	I0927 10:39:50.856738    5160 fix.go:200] guest clock delta is within tolerance: -20.418413ms
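fix.go compares the guest's `date +%s.%N` output against the host clock at the moment the command returns; a delta of about -20ms is well inside tolerance, so no clock adjustment is pushed into the VM. A sketch of that comparison using the values from the log; the one-second threshold is an assumption, as minikube's actual tolerance is not printed here:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Parsed from the guest's `date +%s.%N` output in the log.
    	guest := time.Unix(1727458790, 778619587)
    	// The host-side timestamp ("Remote:") from the same log line.
    	remote := time.Date(2024, 9, 27, 10, 39, 50, 799038000, time.FixedZone("PDT", -7*3600))

    	delta := guest.Sub(remote)
    	const tolerance = time.Second // assumed threshold for this sketch
    	if delta < tolerance && delta > -tolerance {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	} else {
    		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
    	}
    }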
	I0927 10:39:50.856751    5160 start.go:83] releasing machines lock for "stopped-upgrade-862000", held for 20.604873625s
	I0927 10:39:50.856829    5160 ssh_runner.go:195] Run: cat /version.json
	I0927 10:39:50.856843    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	I0927 10:39:50.857511    5160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 10:39:50.857533    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	W0927 10:39:50.888335    5160 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0927 10:39:50.888383    5160 ssh_runner.go:195] Run: systemctl --version
	I0927 10:39:50.931088    5160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 10:39:50.933021    5160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 10:39:50.933071    5160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0927 10:39:50.936639    5160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0927 10:39:50.942971    5160 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
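The two find/sed passes above normalize whatever CNI configs shipped in the image: IPv6 dst/subnet entries are dropped, any remaining subnet is forced to minikube's pod CIDR 10.244.0.0/16, and podman bridge configs also get their gateway pinned to 10.244.0.1. The same rewrite applied to a config snippet in Go rather than sed; real conflists are nested JSON, so regex here is just to keep the sketch short:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `{ "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" }`

    	// Equivalent of the sed passes in the log: pin subnet and gateway
    	// to the cluster's pod CIDR.
    	subnet := regexp.MustCompile(`"subnet": "[^"]*"`)
    	gateway := regexp.MustCompile(`"gateway": "[^"]*"`)
    	out := subnet.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
    	out = gateway.ReplaceAllString(out, `"gateway": "10.244.0.1"`)
    	fmt.Println(out)
    }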
	I0927 10:39:50.942981    5160 start.go:495] detecting cgroup driver to use...
	I0927 10:39:50.943068    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 10:39:50.949728    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0927 10:39:50.952792    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 10:39:50.956106    5160 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 10:39:50.956136    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 10:39:50.959597    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 10:39:50.962645    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 10:39:50.965501    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 10:39:50.968590    5160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 10:39:50.971544    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 10:39:50.974687    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 10:39:50.977533    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0927 10:39:50.980747    5160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 10:39:50.983951    5160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 10:39:50.986771    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:51.049952    5160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0927 10:39:51.058253    5160 start.go:495] detecting cgroup driver to use...
	I0927 10:39:51.058333    5160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0927 10:39:51.064646    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 10:39:51.069630    5160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 10:39:51.075444    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 10:39:51.080413    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 10:39:51.085286    5160 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0927 10:39:51.148221    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 10:39:51.153344    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 10:39:51.158567    5160 ssh_runner.go:195] Run: which cri-dockerd
	I0927 10:39:51.159891    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0927 10:39:51.162693    5160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0927 10:39:51.167709    5160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0927 10:39:51.230544    5160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0927 10:39:51.299081    5160 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0927 10:39:51.299152    5160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0927 10:39:51.304294    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:51.365654    5160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 10:39:52.497853    5160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.132199666s)
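docker.go:574 picks the cgroup driver to match what kubelet will later be told (cgroupfs here) and ships a small daemon.json to the guest before the restart above. The log only records the file's size (130 bytes), not its contents, so the shape below is an assumption consistent with that size:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Assumed daemon.json shape; the log records only its size (130 bytes).
    	cfg := map[string]any{
    		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
    		"log-driver":     "json-file",
    		"log-opts":       map[string]string{"max-size": "100m"},
    		"storage-driver": "overlay2",
    	}
    	b, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(b))
    }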
	I0927 10:39:52.497919    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0927 10:39:52.502811    5160 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0927 10:39:52.512332    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 10:39:52.517348    5160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0927 10:39:52.577459    5160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0927 10:39:52.637991    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:52.697227    5160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0927 10:39:52.703184    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 10:39:52.707413    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:52.777295    5160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0927 10:39:52.816833    5160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0927 10:39:52.816942    5160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0927 10:39:52.819272    5160 start.go:563] Will wait 60s for crictl version
	I0927 10:39:52.819335    5160 ssh_runner.go:195] Run: which crictl
	I0927 10:39:52.820630    5160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 10:39:52.835587    5160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0927 10:39:52.835667    5160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 10:39:52.851954    5160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 10:39:52.872547    5160 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0927 10:39:52.872625    5160 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0927 10:39:52.874105    5160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
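The /etc/hosts edit above is the standard filter-and-append idiom: `grep -v` strips any existing host.minikube.internal line, the fresh mapping to 10.0.2.2 (the host's address under QEMU user-mode networking) is echoed on, and the result is copied back over /etc/hosts, so reruns never accumulate duplicates. The same idiom in Go, with a hypothetical helper name:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHost drops any line already ending in the given name, then appends
    // a fresh "ip\tname" mapping - the grep -v / echo / cp pattern from the log.
    func upsertHost(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n10.0.2.2\thost.minikube.internal\n"
    	fmt.Print(upsertHost(hosts, "10.0.2.2", "host.minikube.internal"))
    }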
	I0927 10:39:52.877591    5160 kubeadm.go:883] updating cluster {Name:stopped-upgrade-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0927 10:39:52.877635    5160 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0927 10:39:52.877685    5160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 10:39:52.888068    5160 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 10:39:52.888077    5160 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0927 10:39:52.888131    5160 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0927 10:39:52.891619    5160 ssh_runner.go:195] Run: which lz4
	I0927 10:39:52.892841    5160 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 10:39:52.894204    5160 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 10:39:52.894214    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0927 10:39:53.817850    5160 docker.go:649] duration metric: took 925.076875ms to copy over tarball
	I0927 10:39:53.817915    5160 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 10:39:54.968406    5160 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.150503583s)
	I0927 10:39:54.968418    5160 ssh_runner.go:146] rm: /preloaded.tar.lz4
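Because the guest had no /preloaded.tar.lz4 (the stat above exited 1), the ~360MB tarball was scp'd in and unpacked directly over /var, then deleted. The tar flags matter: --xattrs with --xattrs-include security.capability preserves file capabilities on binaries such as kube-proxy. The extraction command from the log, wrapped in Go's exec package:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// lz4-decompress and untar over /var, keeping security.capability
    	// xattrs so binaries retain their file capabilities.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("preload extracted")
    }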
	I0927 10:39:54.984320    5160 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0927 10:39:54.987824    5160 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0927 10:39:54.993074    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:55.053382    5160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 10:39:52.198279    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:39:56.662239    5160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.608877125s)
	I0927 10:39:56.662355    5160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 10:39:56.679242    5160 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 10:39:56.679253    5160 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0927 10:39:56.679258    5160 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 10:39:56.684616    5160 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:39:56.686731    5160 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:39:56.688733    5160 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:39:56.688988    5160 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:39:56.690490    5160 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:39:56.690494    5160 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:39:56.691916    5160 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:39:56.692068    5160 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:39:56.693110    5160 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:39:56.693131    5160 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:39:56.694019    5160 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0927 10:39:56.694258    5160 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:39:56.695337    5160 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:39:56.695659    5160 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:39:56.696690    5160 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0927 10:39:56.697360    5160 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:39:57.113954    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:39:57.124490    5160 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0927 10:39:57.124519    5160 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:39:57.124584    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:39:57.135200    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
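cache_images decides per image: inspect the image ID in the runtime, compare it with the expected digest-pinned ID, and if they differ (here the guest carries k8s.gcr.io-tagged images, while the test expects registry.k8s.io ones) remove the stale image and load the cached copy instead. A sketch of that comparison, with the expected ID copied from the log:

    package main

    import "fmt"

    // needsTransfer mirrors cache_images.go:116 - an image "exists" only if the
    // runtime reports the exact ID expected for this tag.
    func needsTransfer(runtimeID, wantID string) bool {
    	return runtimeID != wantID
    }

    func main() {
    	want := "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" // from the log
    	got := ""                                                                  // docker image inspect found nothing
    	if needsTransfer(got, want) {
    		fmt.Println(`"registry.k8s.io/kube-proxy:v1.24.1" needs transfer: remove stale image, load from cache`)
    	}
    }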
	I0927 10:39:57.137557    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:39:57.139523    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:39:57.143981    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:39:57.151839    5160 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0927 10:39:57.151868    5160 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:39:57.151936    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:39:57.157621    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0927 10:39:57.158396    5160 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0927 10:39:57.158412    5160 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:39:57.158448    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:39:57.160962    5160 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0927 10:39:57.160980    5160 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:39:57.161038    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:39:57.169371    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0927 10:39:57.178005    5160 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0927 10:39:57.178026    5160 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:39:57.178092    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0927 10:39:57.181894    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0927 10:39:57.187009    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0927 10:39:57.187560    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0927 10:39:57.199396    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0927 10:39:57.199405    5160 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0927 10:39:57.199512    5160 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0927 10:39:57.199541    5160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0927 10:39:57.199594    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0927 10:39:57.201066    5160 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0927 10:39:57.201352    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:39:57.222631    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0927 10:39:57.222674    5160 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0927 10:39:57.222688    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0927 10:39:57.222702    5160 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0927 10:39:57.222721    5160 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:39:57.222745    5160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0927 10:39:57.222761    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:39:57.258240    5160 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0927 10:39:57.258269    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0927 10:39:57.261746    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0927 10:39:57.261882    5160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0927 10:39:57.276903    5160 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0927 10:39:57.276931    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0927 10:39:57.282496    5160 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0927 10:39:57.282584    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0927 10:39:57.340325    5160 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0927 10:39:57.383348    5160 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0927 10:39:57.383388    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0927 10:39:57.486182    5160 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0927 10:39:57.548920    5160 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0927 10:39:57.548936    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0927 10:39:57.562416    5160 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0927 10:39:57.562544    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:39:57.696301    5160 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0927 10:39:57.696328    5160 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0927 10:39:57.696348    5160 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:39:57.696420    5160 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:39:57.710341    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0927 10:39:57.710479    5160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0927 10:39:57.711808    5160 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0927 10:39:57.711822    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0927 10:39:57.744661    5160 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0927 10:39:57.744675    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0927 10:39:57.973868    5160 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0927 10:39:57.973912    5160 cache_images.go:92] duration metric: took 1.294681167s to LoadCachedImages
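Each cached image is streamed into the runtime with `sudo cat <file> | docker load` over SSH rather than pulled from a registry. The same pipe expressed locally in Go, with the tarball wired to the command's stdin (a sketch, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func dockerLoad(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f // equivalent of `cat <image tar> | docker load`
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	if err := dockerLoad("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
    		fmt.Println("load failed:", err)
    	}
    }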
	W0927 10:39:57.973964    5160 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
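
Note on the image-load sequence above: each cached tarball is copied into the guest and piped straight into the daemon with `sudo cat <tar> | docker load`; the closing warning only means the kube-proxy_v1.24.1 tarball was absent from the host cache, so the warning is surfaced and startup continues. A minimal Go sketch of the same transfer-then-load step (paths taken from the log; the command runs locally here for illustration, whereas minikube drives it over SSH via ssh_runner, and the helper name loadImage is ours):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // loadImage pipes a root-owned image tarball into `docker load`,
    // mirroring: /bin/bash -c "sudo cat <tar> | docker load"
    func loadImage(tarPath string) error {
        cmd := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", tarPath))
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker load %s: %v: %s", tarPath, err, out)
        }
        return nil
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
            fmt.Println(err)
        }
    }

Piping through `sudo cat` keeps the privilege boundary on the file read: only reading the root-owned tarball is elevated, while `docker load` runs as the ordinary SSH user.
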
	I0927 10:39:57.973971    5160 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0927 10:39:57.974028    5160 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-862000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
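
The kubelet unit rendered above uses the standard systemd drop-in idiom: the first, empty `ExecStart=` clears the distribution unit's command list so the second line fully replaces it rather than appending a second invocation. A sketch of the corresponding write step, mirroring the `scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf` line further down (contents abridged; minikube renders the full flag set shown above and copies it over SSH):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Empty ExecStart= resets the unit's command before the override takes effect.
        dropIn := `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --config=/var/lib/kubelet/config.yaml
    `
        if err := os.WriteFile("10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
            fmt.Println(err)
        }
    }
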
	I0927 10:39:57.974109    5160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0927 10:39:57.989692    5160 cni.go:84] Creating CNI manager for ""
	I0927 10:39:57.989712    5160 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:39:57.989721    5160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 10:39:57.989730    5160 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-862000 NodeName:stopped-upgrade-862000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 10:39:57.989798    5160 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-862000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
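
The rendered kubeadm config above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A hedged Go sketch of walking those documents with gopkg.in/yaml.v3 (an assumed external dependency; minikube itself renders this file from its own config rather than re-parsing it, so this is purely illustrative):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // path is an assumption
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        // yaml.Decoder yields one document per Decode call until io.EOF.
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Println(err)
                return
            }
            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        }
    }
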
	I0927 10:39:57.989871    5160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0927 10:39:57.992709    5160 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 10:39:57.992739    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 10:39:57.995758    5160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0927 10:39:58.000874    5160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 10:39:58.006020    5160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0927 10:39:58.011309    5160 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0927 10:39:58.012621    5160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
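
The one-liner above makes the /etc/hosts entry idempotent: any existing line ending in a literal tab plus `control-plane.minikube.internal` is filtered out before the fresh `10.0.2.15` mapping is appended, so repeated starts never accumulate duplicate entries. Roughly the same logic in Go (a direct write instead of the tmp-file-plus-`sudo cp` dance in the log; must run as root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "10.0.2.15\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println(err)
            return
        }
        // Drop any stale mapping for the control-plane name (grep -v equivalent).
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        out := strings.Join(kept, "\n") + "\n"
        if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
            fmt.Println(err)
        }
    }
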
	I0927 10:39:58.016397    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:58.077626    5160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 10:39:58.087189    5160 certs.go:68] Setting up /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000 for IP: 10.0.2.15
	I0927 10:39:58.087202    5160 certs.go:194] generating shared ca certs ...
	I0927 10:39:58.087212    5160 certs.go:226] acquiring lock for ca certs: {Name:mk0418f7d8f4c252d010b1c431fe702739668245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:39:58.087388    5160 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key
	I0927 10:39:58.087436    5160 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key
	I0927 10:39:58.087441    5160 certs.go:256] generating profile certs ...
	I0927 10:39:58.087543    5160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/client.key
	I0927 10:39:58.087561    5160 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key.382452b8
	I0927 10:39:58.087575    5160 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt.382452b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0927 10:39:58.157681    5160 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt.382452b8 ...
	I0927 10:39:58.157697    5160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt.382452b8: {Name:mk3b014ac82695a7784b900ea0e78c3f91e3ea04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:39:58.158131    5160 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key.382452b8 ...
	I0927 10:39:58.158142    5160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key.382452b8: {Name:mk2b182db26c53a67f044097c0f6ad9062ad4010 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:39:58.158308    5160 certs.go:381] copying /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt.382452b8 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt
	I0927 10:39:58.158461    5160 certs.go:385] copying /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key.382452b8 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key
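
The profile-cert step above generates a fresh apiserver serving certificate whose IP SANs are exactly the list logged at 10:39:58.087575 (service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 10.0.2.15), then copies it from its hash-suffixed temp name (.382452b8) into place. A self-contained sketch of such a certificate (self-signed here for brevity; minikube signs with the minikubeCA key it reused above):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the log
            // IP SANs copied from the crypto.go:68 line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        fmt.Println(len(der), err)
    }
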
	I0927 10:39:58.158616    5160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/proxy-client.key
	I0927 10:39:58.158754    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039.pem (1338 bytes)
	W0927 10:39:58.158782    5160 certs.go:480] ignoring /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039_empty.pem, impossibly tiny 0 bytes
	I0927 10:39:58.158787    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 10:39:58.158817    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem (1078 bytes)
	I0927 10:39:58.158842    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem (1123 bytes)
	I0927 10:39:58.158867    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem (1679 bytes)
	I0927 10:39:58.158918    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem (1708 bytes)
	I0927 10:39:58.159292    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 10:39:58.166365    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 10:39:58.172665    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 10:39:58.180051    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 10:39:58.187645    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 10:39:58.194765    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 10:39:58.201273    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 10:39:58.208236    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 10:39:58.215568    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039.pem --> /usr/share/ca-certificates/2039.pem (1338 bytes)
	I0927 10:39:58.222649    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem --> /usr/share/ca-certificates/20392.pem (1708 bytes)
	I0927 10:39:58.229153    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 10:39:58.235889    5160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 10:39:58.240889    5160 ssh_runner.go:195] Run: openssl version
	I0927 10:39:58.242708    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 10:39:58.245532    5160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:39:58.246808    5160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:39:58.246831    5160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:39:58.248483    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 10:39:58.251805    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2039.pem && ln -fs /usr/share/ca-certificates/2039.pem /etc/ssl/certs/2039.pem"
	I0927 10:39:58.254785    5160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2039.pem
	I0927 10:39:58.256122    5160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:11 /usr/share/ca-certificates/2039.pem
	I0927 10:39:58.256148    5160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2039.pem
	I0927 10:39:58.257912    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2039.pem /etc/ssl/certs/51391683.0"
	I0927 10:39:58.260784    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20392.pem && ln -fs /usr/share/ca-certificates/20392.pem /etc/ssl/certs/20392.pem"
	I0927 10:39:58.264281    5160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20392.pem
	I0927 10:39:58.265612    5160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:11 /usr/share/ca-certificates/20392.pem
	I0927 10:39:58.265639    5160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20392.pem
	I0927 10:39:58.267287    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20392.pem /etc/ssl/certs/3ec20f2e.0"
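
The three test-and-link blocks above wire each PEM into the OpenSSL trust store: `openssl x509 -hash -noout` prints the subject hash OpenSSL uses as a lookup name, and the cert is symlinked as `<hash>.0` (hence b5213941.0, 51391683.0, 3ec20f2e.0). A sketch of one such link step (the helper name linkCert is illustrative; needs write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert computes the OpenSSL subject hash of a PEM cert and creates
    // the <hash>.0 symlink in the trust directory if it is not already there.
    func linkCert(pemPath, certDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := certDir + "/" + strings.TrimSpace(string(out)) + ".0"
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked (the `test -L` case above)
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
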
	I0927 10:39:58.270086    5160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 10:39:58.271437    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 10:39:58.273342    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 10:39:58.275215    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 10:39:58.277239    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 10:39:58.279046    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 10:39:58.280833    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
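
The six `openssl x509 -checkend 86400` runs above ask one question per control-plane cert: will it expire within the next 86400 seconds (24 hours)? The same check in Go against the parsed NotAfter field:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresSoon reports whether the PEM certificate at path expires
    // within 24 hours — the Go equivalent of `openssl x509 -checkend 86400`.
    func expiresSoon(path string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }
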
	I0927 10:39:58.282588    5160 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0927 10:39:58.282664    5160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 10:39:58.292481    5160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 10:39:58.295951    5160 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 10:39:58.295963    5160 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 10:39:58.295995    5160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 10:39:58.301768    5160 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 10:39:58.302080    5160 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-862000" does not appear in /Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:39:58.302185    5160 kubeconfig.go:62] /Users/jenkins/minikube-integration/19712-1508/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-862000" cluster setting kubeconfig missing "stopped-upgrade-862000" context setting]
	I0927 10:39:58.302387    5160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/kubeconfig: {Name:mk8300c379932403020f33b54e2599e68fb2c757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:39:58.302841    5160 kapi.go:59] client config for stopped-upgrade-862000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/client.key", CAFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a965d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 10:39:58.303170    5160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 10:39:58.305902    5160 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-862000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
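
The drift check above relies purely on diff semantics: `diff -u old new` exits 0 for identical files and 1 when they differ, and here the differences (the `unix://` CRI-socket prefix, the systemd-to-cgroupfs cgroup-driver change, and two added kubelet fields) force a reconfigure from kubeadm.yaml.new. A sketch of that exit-code dance (the helper name configDrifted is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted runs `diff -u` and maps its exit code: 0 means identical,
    // 1 means the files differ (out then holds the unified diff), >=2 is an error.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil
        }
        return false, "", err
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
        fmt.Print(diff)
    }
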
	I0927 10:39:58.305907    5160 kubeadm.go:1160] stopping kube-system containers ...
	I0927 10:39:58.305954    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 10:39:58.316356    5160 docker.go:483] Stopping containers: [da497851937b 120cb3756aba 9e8db25c44dd 35682614f5ee f305d112a88e d3e7db455b14 726712748f0b e6b2ac509287]
	I0927 10:39:58.316429    5160 ssh_runner.go:195] Run: docker stop da497851937b 120cb3756aba 9e8db25c44dd 35682614f5ee f305d112a88e d3e7db455b14 726712748f0b e6b2ac509287
	I0927 10:39:58.326937    5160 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 10:39:58.332663    5160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 10:39:58.335821    5160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 10:39:58.335826    5160 kubeadm.go:157] found existing configuration files:
	
	I0927 10:39:58.335848    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf
	I0927 10:39:58.338752    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 10:39:58.338777    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 10:39:58.341265    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf
	I0927 10:39:58.344037    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 10:39:58.344060    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 10:39:58.347082    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf
	I0927 10:39:58.349583    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 10:39:58.349608    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 10:39:58.352515    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf
	I0927 10:39:58.355680    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 10:39:58.355706    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
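
The sweep above is the stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint, and is otherwise removed (with `rm -f`, so the missing-file case above is harmless) before the init phases rewrite it. Roughly equivalent logic in Go (endpoint string copied from the log; must run as root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:50526"
        for _, conf := range []string{"admin.conf", "kubelet.conf",
            "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + conf
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(path) // missing or stale: drop it, like `rm -f`
                fmt.Println("removed", path)
            }
        }
    }
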
	I0927 10:39:58.358465    5160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 10:39:58.360985    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:39:58.382306    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:39:58.654989    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:39:58.767542    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:39:58.793992    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
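
Rather than a full `kubeadm init`, the restart path above replays five individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same /var/tmp/minikube/kubeadm.yaml, which regenerates configuration without wiping existing cluster state. A sketch of that loop (assumes kubeadm is on PATH; the log instead prepends /var/lib/minikube/binaries/v1.24.1 via `env`):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
    }
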
	I0927 10:39:58.819520    5160 api_server.go:52] waiting for apiserver process to appear ...
	I0927 10:39:58.819612    5160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:39:59.321656    5160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:39:59.821487    5160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:39:59.826459    5160 api_server.go:72] duration metric: took 1.0069655s to wait for apiserver process to appear ...
	I0927 10:39:59.826469    5160 api_server.go:88] waiting for apiserver healthz status ...
	I0927 10:39:59.826487    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
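
From here the log interleaves two concurrent upgrade tests: PID 5160 (stopped-upgrade-862000, whose timestamps run ahead) and PID 5001 (running-upgrade-198000, whose earlier-stamped lines resume below), each polling its own guest's healthz endpoint at the same address. The wait loop itself is simple: GET https://10.0.2.15:8443/healthz with a short per-request timeout, retry until 200 or the overall deadline lapses. A sketch (TLS verification skipped here for brevity; minikube trusts its own CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver healthz endpoint until it answers 200
    // or the overall deadline passes, matching the cadence in the log.
    func waitHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(5 * time.Second)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }
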
	I0927 10:39:57.198418    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:39:57.198527    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:39:57.210528    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:39:57.210618    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:39:57.221854    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:39:57.221933    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:39:57.234756    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:39:57.234840    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:39:57.252000    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:39:57.252082    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:39:57.265820    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:39:57.265900    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:39:57.279315    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:39:57.279394    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:39:57.293090    5001 logs.go:276] 0 containers: []
	W0927 10:39:57.293102    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:39:57.293177    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:39:57.305524    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:39:57.305544    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:39:57.305551    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:39:57.326647    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:39:57.326660    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:39:57.344241    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:39:57.344254    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:39:57.359238    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:39:57.359248    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:39:57.373297    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:39:57.373308    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:39:57.399035    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:39:57.399047    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:39:57.412565    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:39:57.412582    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:39:57.430123    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:39:57.430137    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:39:57.453496    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:39:57.453512    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:39:57.468795    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:39:57.468809    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:39:57.482743    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:39:57.482754    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:39:57.523938    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:57.524041    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:57.525098    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:39:57.525106    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:39:57.529943    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:39:57.529953    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:39:57.571554    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:39:57.571567    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:39:57.592126    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:39:57.592141    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:39:57.611364    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:39:57.611381    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:39:57.631618    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:57.631631    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:39:57.631661    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:39:57.631666    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:39:57.631670    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:39:57.631692    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:57.631697    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:40:04.828309    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:04.828372    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:09.828483    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:09.828530    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:07.635560    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:14.828839    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:14.828886    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:12.637651    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:12.637824    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:40:12.648456    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:40:12.648549    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:40:12.659207    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:40:12.659300    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:40:12.672672    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:40:12.672754    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:40:12.682761    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:40:12.682843    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:40:12.693473    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:40:12.693544    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:40:12.703972    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:40:12.704065    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:40:12.713973    5001 logs.go:276] 0 containers: []
	W0927 10:40:12.713983    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:40:12.714053    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:40:12.725251    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:40:12.725266    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:40:12.725271    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:40:12.737537    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:40:12.737546    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:40:12.777006    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:40:12.777017    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:40:12.791081    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:40:12.791092    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:40:12.803480    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:40:12.803490    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:40:12.818428    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:40:12.818439    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:40:12.830469    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:40:12.830479    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:40:12.855434    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:40:12.855445    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:40:12.893819    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:40:12.893917    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:40:12.894970    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:40:12.894979    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:40:12.909391    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:40:12.909406    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:40:12.928309    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:40:12.928322    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:40:12.945979    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:40:12.945994    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:40:12.957389    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:40:12.957398    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:40:12.975584    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:40:12.975597    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:40:12.980140    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:40:12.980146    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:40:12.991309    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:40:12.991320    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:40:13.003039    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:40:13.003051    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:40:13.003078    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:40:13.003083    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:40:13.003088    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:40:13.003091    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:40:13.003500    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:40:19.829248    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:19.829294    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:24.829849    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:24.829946    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:23.007415    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:29.830961    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:29.831009    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:28.008677    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:28.009025    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:40:28.035319    5001 logs.go:276] 2 containers: [18389f4fe356 8037932109d8]
	I0927 10:40:28.035468    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:40:28.054090    5001 logs.go:276] 2 containers: [3793a6b5fa9c 22672d08dd78]
	I0927 10:40:28.054191    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:40:28.067991    5001 logs.go:276] 1 containers: [21b498f783df]
	I0927 10:40:28.068081    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:40:28.080951    5001 logs.go:276] 2 containers: [db9bf396966a 58fc035fe7ca]
	I0927 10:40:28.081032    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:40:28.091728    5001 logs.go:276] 1 containers: [51e441c98b5f]
	I0927 10:40:28.091806    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:40:28.101964    5001 logs.go:276] 2 containers: [c6a4e92b448d a4bbda14b841]
	I0927 10:40:28.102033    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:40:28.112203    5001 logs.go:276] 0 containers: []
	W0927 10:40:28.112214    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:40:28.112275    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:40:28.122599    5001 logs.go:276] 1 containers: [ab2fa2bdeb1a]
	I0927 10:40:28.122614    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:40:28.122620    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:40:28.159420    5001 logs.go:123] Gathering logs for kube-apiserver [8037932109d8] ...
	I0927 10:40:28.159437    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8037932109d8"
	I0927 10:40:28.178707    5001 logs.go:123] Gathering logs for kube-proxy [51e441c98b5f] ...
	I0927 10:40:28.178717    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e441c98b5f"
	I0927 10:40:28.191032    5001 logs.go:123] Gathering logs for storage-provisioner [ab2fa2bdeb1a] ...
	I0927 10:40:28.191043    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab2fa2bdeb1a"
	I0927 10:40:28.203269    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:40:28.203280    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:40:28.240235    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:40:28.240339    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:40:28.241357    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:40:28.241363    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:40:28.245857    5001 logs.go:123] Gathering logs for etcd [22672d08dd78] ...
	I0927 10:40:28.245868    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22672d08dd78"
	I0927 10:40:28.263604    5001 logs.go:123] Gathering logs for etcd [3793a6b5fa9c] ...
	I0927 10:40:28.263615    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3793a6b5fa9c"
	I0927 10:40:28.279456    5001 logs.go:123] Gathering logs for coredns [21b498f783df] ...
	I0927 10:40:28.279464    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b498f783df"
	I0927 10:40:28.290580    5001 logs.go:123] Gathering logs for kube-scheduler [db9bf396966a] ...
	I0927 10:40:28.290590    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db9bf396966a"
	I0927 10:40:28.302877    5001 logs.go:123] Gathering logs for kube-scheduler [58fc035fe7ca] ...
	I0927 10:40:28.302892    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fc035fe7ca"
	I0927 10:40:28.318034    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:40:28.318050    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:40:28.341240    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:40:28.341248    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:40:28.354306    5001 logs.go:123] Gathering logs for kube-apiserver [18389f4fe356] ...
	I0927 10:40:28.354317    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18389f4fe356"
	I0927 10:40:28.369636    5001 logs.go:123] Gathering logs for kube-controller-manager [c6a4e92b448d] ...
	I0927 10:40:28.369651    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6a4e92b448d"
	I0927 10:40:28.387698    5001 logs.go:123] Gathering logs for kube-controller-manager [a4bbda14b841] ...
	I0927 10:40:28.387709    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bbda14b841"
	I0927 10:40:28.398984    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:40:28.398997    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:40:28.399024    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:40:28.399028    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:40:28.399032    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:40:28.399036    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:40:28.399039    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:40:34.832497    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:34.832548    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:39.834164    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:39.834210    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:38.402972    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:43.403790    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:43.403858    5001 kubeadm.go:597] duration metric: took 4m6.773253959s to restartPrimaryControlPlane
	W0927 10:40:43.403924    5001 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 10:40:43.403953    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
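
After restartPrimaryControlPlane gives up (4m6.77s of failed healthz probes above), the fallback is destructive but reliable: `kubeadm reset --force` wipes the half-restarted control plane so the fresh `kubeadm init` at 10:40:44 can start clean. The reset invocation, reconstructed from the log line above as a local Go sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command as the log, run through bash so $PATH expands.
        cmd := exec.Command("/bin/bash", "-c",
            `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("reset failed: %v\n%s", err, out)
        }
    }
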
	I0927 10:40:44.339694    5001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 10:40:44.344685    5001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 10:40:44.347354    5001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 10:40:44.350159    5001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 10:40:44.350166    5001 kubeadm.go:157] found existing configuration files:
	
	I0927 10:40:44.350197    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/admin.conf
	I0927 10:40:44.352758    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 10:40:44.352787    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 10:40:44.355219    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/kubelet.conf
	I0927 10:40:44.358279    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 10:40:44.358306    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 10:40:44.361424    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/controller-manager.conf
	I0927 10:40:44.363773    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 10:40:44.363799    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 10:40:44.366558    5001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/scheduler.conf
	I0927 10:40:44.369798    5001 kubeadm.go:163] "https://control-plane.minikube.internal:50287" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50287 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 10:40:44.369828    5001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
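
The four grep-then-remove pairs above are minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf survives only if it already references the expected control-plane endpoint. A condensed sketch of the same check, assuming the endpoint from this run (port 50287) and a direct shell on the node instead of ssh_runner:

    # keep each kubeconfig only if it points at the expected endpoint;
    # grep fails (file missing or no match), so the file is removed
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50287" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done
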
	I0927 10:40:44.372728    5001 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 10:40:44.390846    5001 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0927 10:40:44.390952    5001 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 10:40:44.438354    5001 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 10:40:44.438489    5001 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 10:40:44.438544    5001 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 10:40:44.492585    5001 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 10:40:44.836124    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:44.836141    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:44.496735    5001 out.go:235]   - Generating certificates and keys ...
	I0927 10:40:44.496770    5001 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 10:40:44.496810    5001 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 10:40:44.496856    5001 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 10:40:44.496900    5001 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 10:40:44.496936    5001 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 10:40:44.496967    5001 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 10:40:44.497003    5001 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 10:40:44.497041    5001 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 10:40:44.497079    5001 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 10:40:44.497117    5001 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 10:40:44.497136    5001 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 10:40:44.497161    5001 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 10:40:44.725094    5001 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 10:40:44.786901    5001 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 10:40:44.961939    5001 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 10:40:45.180112    5001 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 10:40:45.208490    5001 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 10:40:45.209714    5001 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 10:40:45.209746    5001 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 10:40:45.283048    5001 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 10:40:45.287163    5001 out.go:235]   - Booting up control plane ...
	I0927 10:40:45.287212    5001 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 10:40:45.287255    5001 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 10:40:45.287317    5001 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 10:40:45.287406    5001 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 10:40:45.287511    5001 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 10:40:49.838168    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:49.838190    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:49.788138    5001 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504298 seconds
	I0927 10:40:49.788246    5001 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 10:40:49.793947    5001 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 10:40:50.320643    5001 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 10:40:50.320989    5001 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-198000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 10:40:50.828655    5001 kubeadm.go:310] [bootstrap-token] Using token: jrf2xd.lubh65ru8b16tcp9
	I0927 10:40:50.835106    5001 out.go:235]   - Configuring RBAC rules ...
	I0927 10:40:50.835211    5001 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 10:40:50.835289    5001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 10:40:50.838485    5001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 10:40:50.839814    5001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 10:40:50.841044    5001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 10:40:50.842611    5001 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 10:40:50.847438    5001 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 10:40:51.029881    5001 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 10:40:51.233472    5001 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 10:40:51.234007    5001 kubeadm.go:310] 
	I0927 10:40:51.234047    5001 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 10:40:51.234051    5001 kubeadm.go:310] 
	I0927 10:40:51.234094    5001 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 10:40:51.234099    5001 kubeadm.go:310] 
	I0927 10:40:51.234113    5001 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 10:40:51.234156    5001 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 10:40:51.234185    5001 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 10:40:51.234189    5001 kubeadm.go:310] 
	I0927 10:40:51.234219    5001 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 10:40:51.234226    5001 kubeadm.go:310] 
	I0927 10:40:51.234274    5001 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 10:40:51.234279    5001 kubeadm.go:310] 
	I0927 10:40:51.234325    5001 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 10:40:51.234381    5001 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 10:40:51.234431    5001 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 10:40:51.234436    5001 kubeadm.go:310] 
	I0927 10:40:51.234492    5001 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 10:40:51.234539    5001 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 10:40:51.234541    5001 kubeadm.go:310] 
	I0927 10:40:51.234591    5001 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jrf2xd.lubh65ru8b16tcp9 \
	I0927 10:40:51.234645    5001 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95f688fe91b88dce6b995ff3f6bae2601ecac9e72bc38ebf6a40f1df30a0f1f1 \
	I0927 10:40:51.234659    5001 kubeadm.go:310] 	--control-plane 
	I0927 10:40:51.234663    5001 kubeadm.go:310] 
	I0927 10:40:51.234705    5001 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 10:40:51.234710    5001 kubeadm.go:310] 
	I0927 10:40:51.234759    5001 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jrf2xd.lubh65ru8b16tcp9 \
	I0927 10:40:51.234826    5001 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95f688fe91b88dce6b995ff3f6bae2601ecac9e72bc38ebf6a40f1df30a0f1f1 
	I0927 10:40:51.234899    5001 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 10:40:51.234912    5001 cni.go:84] Creating CNI manager for ""
	I0927 10:40:51.234922    5001 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:40:51.238865    5001 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 10:40:51.244816    5001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 10:40:51.247924    5001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
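
The 496-byte /etc/cni/net.d/1-k8s.conflist itself is not reproduced in the log. As an illustration only, a minimal bridge conflist of the kind the bridge and host-local CNI plugins accept is sketched below; the subnet and option values are assumptions for the sketch, not minikube's exact file:

    # hypothetical bridge CNI chain; values illustrative, not minikube's shipped file
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
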
	I0927 10:40:51.254912    5001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 10:40:51.254968    5001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 10:40:51.254986    5001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-198000 minikube.k8s.io/updated_at=2024_09_27T10_40_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=running-upgrade-198000 minikube.k8s.io/primary=true
	I0927 10:40:51.298093    5001 ops.go:34] apiserver oom_adj: -16
	I0927 10:40:51.298131    5001 kubeadm.go:1113] duration metric: took 43.216625ms to wait for elevateKubeSystemPrivileges
	I0927 10:40:51.298139    5001 kubeadm.go:394] duration metric: took 4m14.68234525s to StartCluster
	I0927 10:40:51.298150    5001 settings.go:142] acquiring lock: {Name:mk58fc55a93399a03fb1c9ac710554db41068524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:40:51.298241    5001 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:40:51.298649    5001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/kubeconfig: {Name:mk8300c379932403020f33b54e2599e68fb2c757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:40:51.298879    5001 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:40:51.298884    5001 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 10:40:51.298922    5001 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-198000"
	I0927 10:40:51.298937    5001 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-198000"
	W0927 10:40:51.298942    5001 addons.go:243] addon storage-provisioner should already be in state true
	I0927 10:40:51.298952    5001 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-198000"
	I0927 10:40:51.298961    5001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-198000"
	I0927 10:40:51.298952    5001 host.go:66] Checking if "running-upgrade-198000" exists ...
	I0927 10:40:51.298991    5001 config.go:182] Loaded profile config "running-upgrade-198000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:40:51.299798    5001 kapi.go:59] client config for running-upgrade-198000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/running-upgrade-198000/client.key", CAFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1028f65d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 10:40:51.299920    5001 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-198000"
	W0927 10:40:51.299924    5001 addons.go:243] addon default-storageclass should already be in state true
	I0927 10:40:51.299931    5001 host.go:66] Checking if "running-upgrade-198000" exists ...
	I0927 10:40:51.302813    5001 out.go:177] * Verifying Kubernetes components...
	I0927 10:40:51.303153    5001 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 10:40:51.305951    5001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 10:40:51.305959    5001 sshutil.go:53] new ssh client: &{IP:localhost Port:50255 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/running-upgrade-198000/id_rsa Username:docker}
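
The sshutil line above records everything needed to reproduce the session by hand; assuming sshd in the guest is still answering on the forwarded port, the equivalent manual connection is:

    # same key, port, and user as logged by sshutil.go:53
    ssh -p 50255 \
        -i /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/running-upgrade-198000/id_rsa \
        docker@localhost
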
	I0927 10:40:51.308842    5001 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:40:54.840448    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:54.840572    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:51.311868    5001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:40:51.317845    5001 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 10:40:51.317852    5001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 10:40:51.317859    5001 sshutil.go:53] new ssh client: &{IP:localhost Port:50255 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/running-upgrade-198000/id_rsa Username:docker}
	I0927 10:40:51.385406    5001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 10:40:51.391157    5001 api_server.go:52] waiting for apiserver process to appear ...
	I0927 10:40:51.391208    5001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:40:51.395290    5001 api_server.go:72] duration metric: took 96.402459ms to wait for apiserver process to appear ...
	I0927 10:40:51.395297    5001 api_server.go:88] waiting for apiserver healthz status ...
	I0927 10:40:51.395305    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
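
Each "Checking apiserver healthz" / "stopped:" pair in this log is one poll of the apiserver's /healthz endpoint that gives up at the client timeout. A single manual probe would look like the line below, with -k because the host does not trust the cluster CA and an arbitrary 4-second cap standing in for the client timeout; note that 10.0.2.15 is the QEMU user-mode guest address, generally not reachable from the host side, which is consistent with every probe here timing out:

    curl -k --max-time 4 https://10.0.2.15:8443/healthz   # a healthy apiserver prints: ok
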
	I0927 10:40:51.426185    5001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 10:40:51.450574    5001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 10:40:51.751205    5001 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 10:40:51.751218    5001 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 10:40:59.843237    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:59.843733    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:40:59.890393    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:40:59.890540    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:40:59.908375    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:40:59.908484    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:40:59.921846    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:40:59.921926    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:40:59.933520    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:40:59.933606    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:40:59.944494    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:40:59.944576    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:40:59.955568    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:40:59.955661    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:40:59.966375    5160 logs.go:276] 0 containers: []
	W0927 10:40:59.966386    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:40:59.966461    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:40:59.976413    5160 logs.go:276] 0 containers: []
	W0927 10:40:59.976426    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:40:59.976437    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:40:59.976443    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:00.014099    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:00.014111    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:00.027602    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:00.027611    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:00.054127    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:00.054137    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:00.069459    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:00.069471    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:00.080930    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:00.080943    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:00.095173    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:00.095182    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:00.106440    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:00.106450    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:00.118883    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:00.118893    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:00.131983    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:00.131997    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:40:56.397389    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:56.397504    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:00.209852    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:00.209866    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:00.223557    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:00.223570    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:00.238688    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:00.238696    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:00.243496    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:00.243505    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:00.260780    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:00.260791    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
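
Each gathering cycle above (and its later repeats) pairs a name-filtered docker ps with a docker logs --tail 400 per container ID. The same step for one component can be condensed to a single pipeline, assuming an xargs that supports -r (skip the command on empty input):

    # list apiserver container IDs, then dump the last 400 log lines of each
    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}' \
      | xargs -r -n1 docker logs --tail 400
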
	I0927 10:41:02.786289    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:01.398318    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:01.398343    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:07.788501    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:07.788740    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:07.806155    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:07.806265    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:07.819542    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:07.819631    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:07.830428    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:07.830513    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:07.842110    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:07.842188    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:07.852689    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:07.852768    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:07.864019    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:07.864091    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:07.874039    5160 logs.go:276] 0 containers: []
	W0927 10:41:07.874051    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:07.874120    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:07.884563    5160 logs.go:276] 0 containers: []
	W0927 10:41:07.884576    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:07.884585    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:07.884591    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:07.898875    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:07.898886    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:07.912712    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:07.912721    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:07.926946    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:07.926959    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:07.939299    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:07.939308    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:07.964703    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:07.964713    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:08.003463    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:08.003473    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:08.037548    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:08.037559    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:08.052312    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:08.052320    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:08.064241    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:08.064253    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:08.075789    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:08.075799    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:08.089782    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:08.089790    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:08.128288    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:08.128298    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:08.140343    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:08.140360    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:08.159358    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:08.159373    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:06.398984    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:06.399005    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:10.665989    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:11.399573    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:11.399624    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:15.667857    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:15.668124    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:15.684845    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:15.684957    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:15.698406    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:15.698500    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:15.709739    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:15.709816    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:15.720668    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:15.720752    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:15.734391    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:15.734479    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:15.745216    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:15.745299    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:15.755499    5160 logs.go:276] 0 containers: []
	W0927 10:41:15.755511    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:15.755584    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:15.766055    5160 logs.go:276] 0 containers: []
	W0927 10:41:15.766065    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:15.766072    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:15.766079    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:15.804976    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:15.804988    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:15.809697    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:15.809706    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:15.823429    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:15.823441    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:15.837785    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:15.837801    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:15.852778    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:15.852787    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:15.869945    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:15.869956    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:15.881392    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:15.881404    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:15.919585    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:15.919596    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:15.944812    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:15.944823    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:15.956790    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:15.956800    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:15.980870    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:15.980878    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:15.995368    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:15.995376    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:16.006976    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:16.006987    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:16.021902    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:16.021913    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:18.536605    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:16.400437    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:16.400477    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:21.401545    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:21.401588    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0927 10:41:21.752697    5001 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0927 10:41:21.756830    5001 out.go:177] * Enabled addons: storage-provisioner
	I0927 10:41:23.538761    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:23.538969    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:23.552497    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:23.552595    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:23.563800    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:23.563885    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:23.574028    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:23.574116    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:23.584307    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:23.584385    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:23.594925    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:23.595007    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:23.605319    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:23.605394    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:23.615808    5160 logs.go:276] 0 containers: []
	W0927 10:41:23.615819    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:23.615891    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:23.626221    5160 logs.go:276] 0 containers: []
	W0927 10:41:23.626233    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:23.626242    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:23.626248    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:23.639881    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:23.639890    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:23.651499    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:23.651508    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:23.663524    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:23.663535    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:23.700543    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:23.700558    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:23.714962    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:23.714971    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:23.728674    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:23.728683    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:23.752542    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:23.752553    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:23.756887    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:23.756894    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:23.768399    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:23.768414    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:23.786513    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:23.786527    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:23.800045    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:23.800058    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:23.817104    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:23.817113    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:23.851954    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:23.851967    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:23.877336    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:23.877347    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:21.765596    5001 addons.go:510] duration metric: took 30.467502958s for enable addons: enabled=[storage-provisioner]
	I0927 10:41:26.393919    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:26.402968    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:26.403001    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:31.395321    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:31.395513    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:31.406830    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:31.406904    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:31.417502    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:31.417574    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:31.427950    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:31.428033    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:31.438829    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:31.438916    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:31.449943    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:31.450034    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:31.466910    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:31.466987    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:31.478724    5160 logs.go:276] 0 containers: []
	W0927 10:41:31.478737    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:31.478806    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:31.493611    5160 logs.go:276] 0 containers: []
	W0927 10:41:31.493622    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:31.493630    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:31.493636    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:31.497906    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:31.497914    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:31.509522    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:31.509532    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:31.534659    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:31.534667    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:31.546811    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:31.546820    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:31.585909    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:31.585917    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:31.610111    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:31.610123    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:31.624304    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:31.624315    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:31.639205    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:31.639218    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:31.673015    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:31.673031    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:31.688266    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:31.688276    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:31.701707    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:31.701717    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:31.713353    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:31.713363    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:31.731047    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:31.731057    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:31.743757    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:31.743768    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:34.258938    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:31.404717    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:31.404741    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:39.261190    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:39.261477    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:39.282367    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:39.282487    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:39.296815    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:39.296904    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:39.308764    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:39.308848    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:39.319423    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:39.319514    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:39.330042    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:39.330121    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:39.340600    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:39.340682    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:39.350689    5160 logs.go:276] 0 containers: []
	W0927 10:41:39.350701    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:39.350771    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:39.362260    5160 logs.go:276] 0 containers: []
	W0927 10:41:39.362272    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:39.362279    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:39.362285    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:39.366950    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:39.366958    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:39.390438    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:39.390454    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:39.408374    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:39.408387    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:39.422770    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:39.422785    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:39.447426    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:39.447438    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:39.465535    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:39.465550    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:39.481188    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:39.481200    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:39.492930    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:39.492942    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:39.511023    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:39.511034    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:39.550232    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:39.550246    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:39.568009    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:39.568023    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:39.585965    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:39.585978    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:39.598104    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:39.598117    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:39.632822    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:39.632835    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:36.406785    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:36.406816    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:42.149671    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:41.408869    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:41.408922    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:47.152237    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:47.152688    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:47.191355    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:47.191549    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:47.211302    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:47.211419    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:47.229117    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:47.229209    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:47.241154    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:47.241239    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:47.251964    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:47.252044    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:47.262907    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:47.262995    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:47.274218    5160 logs.go:276] 0 containers: []
	W0927 10:41:47.274228    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:47.274298    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:47.285292    5160 logs.go:276] 0 containers: []
	W0927 10:41:47.285304    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:47.285312    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:47.285317    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:47.297070    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:47.297080    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:47.314625    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:47.314634    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:47.335489    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:47.335499    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:47.375364    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:47.375371    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:47.400545    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:47.400557    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:47.415826    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:47.415842    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:47.430111    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:47.430120    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:47.448723    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:47.448736    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:47.452718    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:47.452724    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:47.475456    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:47.475469    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:47.486946    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:47.486957    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:47.501379    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:47.501390    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:47.528242    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:47.528252    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:47.584693    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:47.584704    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
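
	[editor's note] Each diagnostics pass above follows the same shape: enumerate candidate containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail each hit with docker logs --tail 400 <id>. Below is a minimal Go sketch of that enumerate-and-tail pattern, for illustration only: it is not minikube's actual code (minikube runs these commands on the guest VM through its SSH runner), and it assumes a local docker binary on PATH.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// tailComponentLogs lists containers whose name matches k8s_<component>
	// and prints the last 400 log lines of each, mirroring the pattern in
	// the log above. Hypothetical helper; minikube itself executes these
	// commands remotely via ssh_runner rather than locally.
	func tailComponentLogs(component string) error {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return err
			}
			fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
		}
		return nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			if err := tailComponentLogs(c); err != nil {
				fmt.Println("error:", err)
			}
		}
	}
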
	I0927 10:41:50.098974    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:46.411092    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:46.411132    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:55.101631    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:55.102051    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:55.132680    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:55.132836    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:55.151304    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:55.151407    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:51.413322    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:51.413583    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:51.441548    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:41:51.441679    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:51.459651    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:41:51.459771    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:51.491522    5001 logs.go:276] 2 containers: [11c6047c72b7 71bf4fcc074d]
	I0927 10:41:51.491609    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:51.508125    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:41:51.508212    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:51.519628    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:41:51.519721    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:51.530048    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:41:51.530120    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:51.540035    5001 logs.go:276] 0 containers: []
	W0927 10:41:51.540046    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:51.540121    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:51.551397    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:41:51.551412    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:41:51.551417    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:41:51.566279    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:41:51.566289    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:41:51.578353    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:41:51.578369    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:41:51.590135    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:41:51.590145    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:41:51.604498    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:41:51.604511    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:41:51.621751    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:41:51.621761    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:41:51.633200    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:51.633210    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:41:51.651769    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:41:51.651867    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:41:51.668497    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:41:51.668504    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:41:51.682937    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:51.682947    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:51.707491    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:41:51.707498    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:41:51.718995    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:41:51.719009    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:51.730090    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:51.730102    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:51.734755    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:51.734762    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:51.773031    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:41:51.773044    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:41:51.773072    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:41:51.773076    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:41:51.773080    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:41:51.773085    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:41:51.773088    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
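
	[editor's note] The two kubelet problems flagged above are Node-authorizer RBAC denials: the node identity system:node:running-upgrade-198000 may only read objects the authorizer can relate to a pod scheduled on that node, and no such relationship was found for the kube-root-ca.crt ConfigMap. A hedged client-go sketch that should provoke a similar "forbidden" response when run with the kubelet's credentials follows; the kubeconfig path is an assumption, not taken from this log.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: the kubelet's kubeconfig lives at this conventional
		// path on the guest; adjust for the VM under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Listing configmaps in kube-system as the node identity is expected
		// to fail with a forbidden error like the "no relationship found
		// between node ... and this object" message in the kubelet journal.
		_, err = cs.CoreV1().ConfigMaps("kube-system").List(context.TODO(), metav1.ListOptions{})
		fmt.Println("list configmaps as node:", err)
	}
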
	I0927 10:41:55.166505    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:55.166592    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:55.178382    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:55.178464    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:55.188967    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:55.189034    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:55.199609    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:55.199676    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:55.209957    5160 logs.go:276] 0 containers: []
	W0927 10:41:55.209968    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:55.210025    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:55.220180    5160 logs.go:276] 0 containers: []
	W0927 10:41:55.220191    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:55.220198    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:55.220204    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:55.259177    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:55.259189    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:55.283559    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:55.283567    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:55.287649    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:55.287655    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:55.322177    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:55.322188    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:55.336505    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:55.336515    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:55.351788    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:55.351797    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:55.366205    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:55.366215    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:55.380103    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:55.380112    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:55.394029    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:55.394040    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:55.405366    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:55.405376    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:55.422330    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:55.422340    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:55.446905    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:55.446915    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:55.459137    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:55.459147    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:55.472995    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:55.473004    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:57.988316    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:02.990857    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:02.991096    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:03.007709    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:03.007812    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:03.021191    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:03.021268    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:03.031970    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:03.032058    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:03.042507    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:03.042590    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:03.053167    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:03.053263    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:03.063875    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:03.063957    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:03.092748    5160 logs.go:276] 0 containers: []
	W0927 10:42:03.092762    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:03.092832    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:03.110496    5160 logs.go:276] 0 containers: []
	W0927 10:42:03.110508    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:03.110518    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:03.110524    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:03.148951    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:03.148962    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:03.163151    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:03.163164    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:03.177998    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:03.178012    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:03.189823    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:03.189836    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:03.206450    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:03.206463    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:03.218390    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:03.218403    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:03.223077    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:03.223083    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:03.238523    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:03.238536    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:03.250394    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:03.250405    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:03.275217    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:03.275232    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:03.319125    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:03.319135    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:03.334226    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:03.334236    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:03.358796    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:03.358806    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:03.370349    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:03.370364    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:01.776979    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:05.893059    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:06.777982    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:06.778173    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:06.794530    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:42:06.794629    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:06.808384    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:42:06.808475    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:06.819469    5001 logs.go:276] 2 containers: [11c6047c72b7 71bf4fcc074d]
	I0927 10:42:06.819551    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:06.829757    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:42:06.829838    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:06.839874    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:42:06.839949    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:06.850581    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:42:06.850660    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:06.861110    5001 logs.go:276] 0 containers: []
	W0927 10:42:06.861121    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:06.861185    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:06.871185    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:42:06.871202    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:42:06.871207    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:42:06.883182    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:42:06.883197    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:42:06.905933    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:42:06.905948    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:42:06.917528    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:06.917542    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:06.942672    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:06.942679    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:06.947401    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:42:06.947410    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:42:06.961433    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:42:06.961444    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:42:06.974750    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:42:06.974759    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:42:06.986077    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:42:06.986087    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:06.997783    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:06.997792    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:42:07.014537    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:07.014634    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:07.031357    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:07.031362    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:07.069520    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:42:07.069531    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:42:07.081067    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:42:07.081078    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:42:07.096666    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:07.096683    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:42:07.096711    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:42:07.096717    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:07.096720    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:07.096728    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:07.096731    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:42:10.895266    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:10.895449    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:10.908899    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:10.908995    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:10.920194    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:10.920280    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:10.930301    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:10.930396    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:10.941091    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:10.941174    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:10.951400    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:10.951479    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:10.962734    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:10.962808    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:10.973058    5160 logs.go:276] 0 containers: []
	W0927 10:42:10.973069    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:10.973141    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:10.984591    5160 logs.go:276] 0 containers: []
	W0927 10:42:10.984603    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:10.984611    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:10.984617    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:10.998826    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:10.998836    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:11.022134    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:11.022143    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:11.026276    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:11.026285    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:11.040137    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:11.040147    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:11.053837    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:11.053847    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:11.068632    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:11.068642    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:11.086345    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:11.086355    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:11.098645    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:11.098656    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:11.139015    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:11.139026    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:11.153050    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:11.153060    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:11.171787    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:11.171799    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:11.183427    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:11.183437    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:11.196499    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:11.196508    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:11.230860    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:11.230872    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:13.758496    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:18.760989    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:18.761133    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:18.773627    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:18.773718    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:18.784264    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:18.784350    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:18.801936    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:18.802022    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:18.812841    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:18.812930    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:18.823474    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:18.823557    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:18.834053    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:18.834138    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:18.844080    5160 logs.go:276] 0 containers: []
	W0927 10:42:18.844094    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:18.844162    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:18.856617    5160 logs.go:276] 0 containers: []
	W0927 10:42:18.856633    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:18.856642    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:18.856649    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:18.868905    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:18.868915    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:18.886403    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:18.886413    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:18.898341    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:18.898357    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:18.935239    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:18.935246    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:18.952917    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:18.952926    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:18.969518    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:18.969530    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:18.981454    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:18.981464    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:18.985391    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:18.985397    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:18.999430    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:18.999441    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:19.011719    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:19.011730    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:19.032602    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:19.032612    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:19.057834    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:19.057846    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:19.072968    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:19.072978    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:19.095642    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:19.095651    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:17.099802    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:21.632934    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:22.101356    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:22.101575    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:22.124892    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:42:22.124993    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:22.137947    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:42:22.138027    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:22.149462    5001 logs.go:276] 2 containers: [11c6047c72b7 71bf4fcc074d]
	I0927 10:42:22.149552    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:22.160021    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:42:22.160092    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:22.170332    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:42:22.170408    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:22.180977    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:42:22.181065    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:22.190960    5001 logs.go:276] 0 containers: []
	W0927 10:42:22.190974    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:22.191044    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:22.202242    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:42:22.202260    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:42:22.202266    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:42:22.213837    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:42:22.213850    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:42:22.225571    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:22.225581    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:22.230215    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:22.230224    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:22.273023    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:42:22.273037    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:42:22.287337    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:42:22.287347    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:42:22.303950    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:42:22.303962    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:42:22.315629    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:42:22.315642    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:42:22.333600    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:22.333609    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:22.357241    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:42:22.357249    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:22.368599    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:22.368610    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:42:22.385646    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:22.385744    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:22.402374    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:42:22.402380    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:42:22.416547    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:42:22.416557    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:42:22.427666    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:22.427677    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:42:22.427704    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:42:22.427709    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:22.427712    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:22.427717    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:22.427719    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:42:26.635226    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:26.635456    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:26.653016    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:26.653118    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:26.666126    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:26.666219    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:26.677798    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:26.677881    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:26.688167    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:26.688258    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:26.698531    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:26.698615    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:26.709176    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:26.709258    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:26.719237    5160 logs.go:276] 0 containers: []
	W0927 10:42:26.719249    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:26.719317    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:26.731288    5160 logs.go:276] 0 containers: []
	W0927 10:42:26.731302    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:26.731311    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:26.731316    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:26.745804    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:26.745815    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:26.760585    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:26.760596    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:26.775661    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:26.775672    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:26.814996    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:26.815004    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:26.849483    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:26.849494    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:26.861337    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:26.861351    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:26.865367    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:26.865375    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:26.879151    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:26.879162    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:26.896104    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:26.896115    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:26.920258    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:26.920265    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:26.944040    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:26.944051    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:26.961561    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:26.961570    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:26.974435    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:26.974445    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:26.987069    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:26.987079    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:29.501312    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:34.503360    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
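
	[editor's note] Every "Checking apiserver healthz ... stopped: ... context deadline exceeded" pair in this log is one probe of https://10.0.2.15:8443/healthz that exhausts its client timeout before any response headers arrive, after which the tool falls back to another log-gathering pass. A minimal Go sketch of such a bounded probe is below; the five-second budget (inferred from the timestamp gaps above) and the InsecureSkipVerify shortcut are assumptions for illustration, not minikube's implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed budget; matches the ~5 s gaps in the log
			Transport: &http.Transport{
				// Shortcut for this sketch only; a real health check should
				// verify the apiserver certificate instead of skipping it.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// With the apiserver unresponsive this prints the same
			// "context deadline exceeded (Client.Timeout exceeded while
			// awaiting headers)" error recorded above.
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}
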
	I0927 10:42:34.503654    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:34.529504    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:34.529642    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:34.545734    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:34.545832    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:34.558764    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:34.558852    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:34.570313    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:34.570396    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:34.580862    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:34.580939    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:34.591559    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:34.591633    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:34.601890    5160 logs.go:276] 0 containers: []
	W0927 10:42:34.601902    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:34.601966    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:34.612320    5160 logs.go:276] 0 containers: []
	W0927 10:42:34.612333    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:34.612341    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:34.612346    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:34.635183    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:34.635190    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:34.671445    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:34.671456    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:34.685648    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:34.685658    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:34.703679    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:34.703689    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:34.717870    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:34.717881    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:34.736349    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:34.736360    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:34.775508    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:34.775519    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:34.780493    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:34.780505    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:34.793181    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:34.793192    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:34.808132    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:34.808144    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:34.823190    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:34.823206    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:34.834891    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:34.834904    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:34.846692    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:34.846707    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:34.872420    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:34.872430    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:32.431655    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:37.388117    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:37.434260    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:37.434723    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:37.477761    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:42:37.477913    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:37.496772    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:42:37.496875    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:37.511347    5001 logs.go:276] 2 containers: [11c6047c72b7 71bf4fcc074d]
	I0927 10:42:37.511427    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:37.523361    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:42:37.523432    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:37.534086    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:42:37.534159    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:37.545300    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:42:37.545387    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:37.555501    5001 logs.go:276] 0 containers: []
	W0927 10:42:37.555513    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:37.555583    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:37.566523    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:42:37.566541    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:42:37.566547    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:42:37.589938    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:42:37.589947    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:42:37.602500    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:37.602510    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:42:37.619312    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:37.619411    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:37.635758    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:37.635765    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:37.672634    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:42:37.672645    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:42:37.690095    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:42:37.690106    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:42:37.701994    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:42:37.702006    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:42:37.717206    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:42:37.717216    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:37.729080    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:37.729091    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:37.733480    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:42:37.733486    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:42:37.748578    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:42:37.748588    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:42:37.760152    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:42:37.760164    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:42:37.772091    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:37.772101    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:37.795321    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:37.795330    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:42:37.795354    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:42:37.795359    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:37.795362    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:37.795366    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:37.795383    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
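	(Both interleaved processes here, PIDs 5001 and 5160, are stuck in the same poll-then-gather loop: probe https://10.0.2.15:8443/healthz with a short client timeout, and on "context deadline exceeded" collect component logs before retrying. A minimal bash sketch of that loop — my reconstruction, not minikube's code; the retry count and sleep are placeholder values:

	    HEALTHZ="https://10.0.2.15:8443/healthz"
	    for attempt in 1 2 3; do
	      # -k: the test apiserver presents a self-signed certificate
	      if curl -sk --max-time 5 "$HEALTHZ" | grep -q '^ok$'; then
	        echo "apiserver healthy"; break
	      fi
	      echo "stopped: $HEALTHZ (attempt $attempt)"
	      # minikube runs the log-gathering pass shown above before each retry
	      sleep 5
	    done
	)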
	I0927 10:42:42.390733    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:42.391032    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:42.416125    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:42.416265    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:42.433192    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:42.433299    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:42.446836    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:42.446925    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:42.458255    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:42.458333    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:42.470601    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:42.470685    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:42.481622    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:42.481702    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:42.496605    5160 logs.go:276] 0 containers: []
	W0927 10:42:42.496617    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:42.496692    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:42.507301    5160 logs.go:276] 0 containers: []
	W0927 10:42:42.507312    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:42.507319    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:42.507324    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:42.520954    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:42.520965    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:42.535272    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:42.535283    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:42.547754    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:42.547764    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:42.562579    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:42.562595    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:42.580944    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:42.580955    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:42.592158    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:42.592174    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:42.617720    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:42.617731    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:42.631570    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:42.631584    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:42.645016    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:42.645026    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:42.657545    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:42.657555    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:42.682090    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:42.682100    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:42.720247    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:42.720255    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:42.724332    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:42.724340    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:42.759285    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:42.759296    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:45.278243    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:47.799337    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:50.278938    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:50.279315    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:50.309211    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:50.309371    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:50.328059    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:50.328154    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:50.341806    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:50.341898    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:50.357210    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:50.357310    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:50.367589    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:50.367666    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:50.378376    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:50.378464    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:50.388393    5160 logs.go:276] 0 containers: []
	W0927 10:42:50.388405    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:50.388477    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:50.398883    5160 logs.go:276] 0 containers: []
	W0927 10:42:50.398899    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:50.398906    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:50.398912    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:50.403598    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:50.403604    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:50.415186    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:50.415196    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:50.427951    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:50.427962    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:50.442223    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:50.442234    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:50.456901    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:50.456911    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:50.468086    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:50.468096    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:50.485171    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:50.485182    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:50.510315    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:50.510326    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:50.524668    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:50.524678    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:50.542655    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:50.542666    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:50.565329    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:50.565336    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:50.602554    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:50.602564    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:50.644210    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:50.644223    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:50.659173    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:50.659183    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:53.174171    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:52.802019    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:52.802352    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:52.828585    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:42:52.828745    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:52.846487    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:42:52.846596    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:52.860125    5001 logs.go:276] 2 containers: [11c6047c72b7 71bf4fcc074d]
	I0927 10:42:52.860213    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:52.871413    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:42:52.871496    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:52.882449    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:42:52.882528    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:52.893030    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:42:52.893108    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:52.905714    5001 logs.go:276] 0 containers: []
	W0927 10:42:52.905726    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:52.905802    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:52.916269    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:42:52.916285    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:52.916291    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:52.951794    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:42:52.951809    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:42:52.966104    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:42:52.966118    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:42:52.984255    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:42:52.984269    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:42:52.996182    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:42:52.996196    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:42:53.013691    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:42:53.013706    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:42:53.029458    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:53.029473    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:53.034223    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:42:53.034231    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:42:53.048278    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:42:53.048292    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:42:53.059671    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:42:53.059685    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:42:53.077150    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:53.077163    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:53.100174    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:42:53.100181    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:53.114387    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:53.114399    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:42:53.131523    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:53.131621    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:53.148485    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:53.148491    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:42:53.148518    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:42:53.148523    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:42:53.148525    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:42:53.148529    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:42:53.148532    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
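	(Each gather cycle enumerates per-component containers with docker name filters and tails the last 400 lines of each, alongside journalctl for the kubelet and Docker units plus a dmesg pass. A standalone sketch of that collection pattern, directly mirroring the commands in the log — the k8s_<component>_... container naming is the kubeadm convention the filters above rely on, and this would run inside the guest, not on the host:

	    for c in kube-apiserver etcd coredns kube-scheduler \
	             kube-proxy kube-controller-manager storage-provisioner; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      echo "$(wc -w <<< "$ids") containers for ${c}: [${ids}]"
	      for id in $ids; do
	        docker logs --tail 400 "$id" 2>&1   # container stdout+stderr
	      done
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	)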
	I0927 10:42:58.176287    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:58.176486    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:58.190555    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:58.190639    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:58.202154    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:58.202232    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:58.213117    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:58.213206    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:58.226821    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:58.226904    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:58.241396    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:58.241474    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:58.254718    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:58.254806    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:58.268867    5160 logs.go:276] 0 containers: []
	W0927 10:42:58.268881    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:58.268952    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:58.279278    5160 logs.go:276] 0 containers: []
	W0927 10:42:58.279290    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:58.279299    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:58.279305    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:58.293774    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:58.293787    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:58.305515    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:58.305525    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:58.310100    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:58.310109    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:58.351900    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:58.351912    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:58.366570    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:58.366583    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:58.383952    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:58.383962    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:58.399599    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:58.399614    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:58.424384    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:58.424394    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:58.440300    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:58.440315    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:58.463241    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:58.463249    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:58.475281    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:58.475293    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:58.512763    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:58.512772    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:58.527596    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:58.527609    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:58.538941    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:58.538951    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:01.053188    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:03.152471    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:06.055457    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:06.055589    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:06.069521    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:06.069616    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:06.080690    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:06.080773    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:06.091179    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:06.091263    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:06.102172    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:06.102248    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:06.112692    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:06.112780    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:06.123662    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:06.123744    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:06.133867    5160 logs.go:276] 0 containers: []
	W0927 10:43:06.133879    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:06.133947    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:06.143733    5160 logs.go:276] 0 containers: []
	W0927 10:43:06.143742    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:06.143750    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:06.143755    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:06.157686    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:06.157696    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:06.169865    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:06.169876    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:06.189796    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:06.189806    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:06.213963    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:06.213970    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:06.252418    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:06.252426    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:06.287354    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:06.287365    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:06.302624    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:06.302634    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:06.317139    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:06.317150    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:06.329218    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:06.329227    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:06.333645    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:06.333653    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:06.347740    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:06.347750    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:06.373669    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:06.373678    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:06.385036    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:06.385046    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:06.399578    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:06.399589    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:08.912988    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:08.154671    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:08.155189    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:08.195741    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:43:08.195877    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:08.212859    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:43:08.212957    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:08.226488    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:43:08.226581    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:08.237863    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:43:08.237939    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:08.247966    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:43:08.248055    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:08.258231    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:43:08.258307    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:08.268712    5001 logs.go:276] 0 containers: []
	W0927 10:43:08.268723    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:08.268793    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:08.279358    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:43:08.279375    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:43:08.279380    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:43:08.294495    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:43:08.294504    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:43:08.306490    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:43:08.306502    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:08.319920    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:08.319933    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:43:08.336474    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:08.336571    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:08.353529    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:43:08.353535    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:43:08.370640    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:43:08.370652    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:43:08.382655    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:43:08.382666    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:43:08.394449    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:43:08.394464    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:43:08.408985    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:43:08.408994    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:43:08.420403    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:43:08.420416    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:43:08.432194    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:08.432203    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:08.436596    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:08.436604    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:08.470633    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:08.470644    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:08.495602    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:43:08.495611    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:43:08.506322    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:43:08.506332    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:43:08.530731    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:08.530744    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:43:08.530767    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:43:08.530772    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:08.530775    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:08.530779    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:08.530782    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:43:13.915098    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:13.915303    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:13.929694    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:13.929793    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:13.942027    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:13.942110    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:13.952340    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:13.952426    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:13.962750    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:13.962835    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:13.973386    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:13.973458    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:13.983968    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:13.984035    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:13.993854    5160 logs.go:276] 0 containers: []
	W0927 10:43:13.993863    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:13.993921    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:14.003754    5160 logs.go:276] 0 containers: []
	W0927 10:43:14.003765    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:14.003773    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:14.003779    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:14.042920    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:14.042928    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:14.057505    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:14.057515    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:14.070983    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:14.070994    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:14.088602    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:14.088611    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:14.106099    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:14.106111    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:14.119914    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:14.119924    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:14.138731    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:14.138741    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:14.155312    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:14.155322    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:14.168262    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:14.168278    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:14.172302    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:14.172310    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:14.206745    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:14.206760    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:14.231535    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:14.231545    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:14.243854    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:14.243864    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:14.265900    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:14.265907    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:16.779614    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:18.532973    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:21.780616    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:21.780865    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:21.804640    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:21.804765    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:21.820420    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:21.820518    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:21.833014    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:21.833098    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:21.844095    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:21.844179    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:21.854858    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:21.854942    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:21.865950    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:21.866027    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:21.876659    5160 logs.go:276] 0 containers: []
	W0927 10:43:21.876672    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:21.876734    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:21.886441    5160 logs.go:276] 0 containers: []
	W0927 10:43:21.886454    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:21.886461    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:21.886467    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:21.908956    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:21.908963    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:21.913045    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:21.913052    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:21.938078    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:21.938088    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:21.954819    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:21.954830    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:21.971930    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:21.971940    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:22.007690    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:22.007699    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:22.018963    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:22.018973    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:22.030651    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:22.030661    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:22.069848    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:22.069858    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:22.087082    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:22.087095    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:22.099698    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:22.099708    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:22.117149    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:22.117167    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:22.145215    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:22.145232    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:22.168050    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:22.168061    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:24.681509    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:23.535213    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:23.535450    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:23.552544    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:43:23.552646    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:23.565464    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:43:23.565553    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:23.577002    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:43:23.577086    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:23.587669    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:43:23.587756    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:23.597845    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:43:23.597928    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:23.608360    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:43:23.608438    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:23.618390    5001 logs.go:276] 0 containers: []
	W0927 10:43:23.618401    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:23.618471    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:23.628957    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:43:23.628974    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:23.628979    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:43:23.645486    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:23.645583    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:23.662177    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:43:23.662185    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:43:23.679988    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:43:23.680002    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:23.691926    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:43:23.691938    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:43:23.704637    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:43:23.704649    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:43:23.720280    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:43:23.720294    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:43:23.737936    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:43:23.737949    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:43:23.753319    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:23.753328    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:23.758233    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:43:23.758240    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:43:23.772654    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:43:23.772667    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:43:23.784529    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:43:23.784542    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:43:23.798542    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:43:23.798552    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:43:23.810240    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:43:23.810252    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:43:23.822403    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:23.822414    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:23.858105    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:23.858120    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:23.883070    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:23.883079    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:43:23.883109    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:43:23.883115    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:23.883124    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:23.883127    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:23.883130    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:43:29.683722    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:29.683920    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:29.700017    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:29.700123    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:29.713254    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:29.713330    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:29.724794    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:29.724866    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:29.735238    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:29.735324    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:29.746186    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:29.746270    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:29.757502    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:29.757580    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:29.768013    5160 logs.go:276] 0 containers: []
	W0927 10:43:29.768023    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:29.768087    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:29.777770    5160 logs.go:276] 0 containers: []
	W0927 10:43:29.777779    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:29.777786    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:29.777791    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:29.793068    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:29.793083    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:29.807532    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:29.807542    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:29.822156    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:29.822165    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:29.845016    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:29.845026    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:29.849592    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:29.849602    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:29.884748    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:29.884758    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:29.909431    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:29.909446    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:29.923480    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:29.923491    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:29.934891    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:29.934905    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:29.949311    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:29.949325    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:29.961132    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:29.961145    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:29.979647    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:29.979656    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:30.017338    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:30.017349    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:30.030486    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:30.030499    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:32.542057    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:33.887032    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:37.544417    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:37.544719    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:37.569675    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:37.569819    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:37.586911    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:37.587011    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:37.599709    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:37.599790    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:37.611402    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:37.611488    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:37.625355    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:37.625435    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:37.635900    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:37.635976    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:37.649214    5160 logs.go:276] 0 containers: []
	W0927 10:43:37.649228    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:37.649299    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:37.658946    5160 logs.go:276] 0 containers: []
	W0927 10:43:37.658956    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:37.658965    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:37.658972    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:37.677032    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:37.677046    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:37.690629    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:37.690642    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:37.708784    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:37.708796    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:37.732681    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:37.732687    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:37.746838    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:37.746853    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:37.761679    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:37.761689    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:37.774137    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:37.774148    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:37.785975    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:37.785986    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:37.790551    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:37.790560    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:37.814949    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:37.814961    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:37.829207    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:37.829215    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:37.868495    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:37.868506    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:37.902858    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:37.902869    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:37.914379    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:37.914390    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:38.889197    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:38.889486    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:38.914624    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:43:38.914751    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:38.933088    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:43:38.933181    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:38.947290    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:43:38.947381    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:38.958717    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:43:38.958800    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:38.969333    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:43:38.969409    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:38.979556    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:43:38.979637    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:38.989643    5001 logs.go:276] 0 containers: []
	W0927 10:43:38.989657    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:38.989731    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:39.000338    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:43:39.000356    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:43:39.000361    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:43:39.014888    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:43:39.014903    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:43:39.025918    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:43:39.025931    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:43:39.037461    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:43:39.037470    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:43:39.049503    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:39.049513    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:39.084922    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:43:39.084933    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:43:39.098986    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:43:39.098997    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:43:39.111722    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:39.111738    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:43:39.128201    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:39.128541    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:39.146052    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:39.146060    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:39.150830    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:43:39.150839    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:43:39.165384    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:43:39.165394    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:43:39.180227    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:43:39.180237    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:43:39.197228    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:43:39.197239    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:43:39.209384    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:39.209394    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:39.234471    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:43:39.234483    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:39.246162    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:39.246173    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:43:39.246202    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:43:39.246207    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:39.246220    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:39.246224    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:39.246228    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:43:40.427813    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:45.429954    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:45.430149    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:45.442613    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:45.442699    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:45.453335    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:45.453422    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:45.464015    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:45.464102    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:45.475581    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:45.475659    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:45.486190    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:45.486275    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:45.496990    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:45.497068    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:45.507214    5160 logs.go:276] 0 containers: []
	W0927 10:43:45.507226    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:45.507294    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:45.517821    5160 logs.go:276] 0 containers: []
	W0927 10:43:45.517832    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:45.517841    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:45.517846    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:45.522344    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:45.522372    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:45.555543    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:45.555554    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:45.570027    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:45.570039    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:45.593782    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:45.593792    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:45.605856    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:45.605866    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:45.644609    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:45.644620    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:45.658823    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:45.658835    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:45.670871    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:45.670882    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:45.684958    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:45.684971    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:45.699098    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:45.699110    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:45.710558    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:45.710568    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:45.725172    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:45.725183    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:45.748771    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:45.748782    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:45.771571    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:45.771584    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:48.285650    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:49.250106    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:53.288288    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:53.288823    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:53.330715    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:53.330861    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:53.353013    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:53.353108    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:53.367763    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:53.367853    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:53.380041    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:53.380130    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:53.390825    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:53.390901    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:53.401349    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:53.401434    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:53.411074    5160 logs.go:276] 0 containers: []
	W0927 10:43:53.411086    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:53.411156    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:53.422095    5160 logs.go:276] 0 containers: []
	W0927 10:43:53.422109    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:53.422118    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:53.422125    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:53.446753    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:53.446767    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:53.462062    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:53.462077    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:53.474729    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:53.474738    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:53.479023    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:53.479029    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:53.512920    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:53.512930    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:53.524827    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:53.524838    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:53.542326    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:53.542339    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:53.554796    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:53.554810    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:53.577031    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:53.577039    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:53.614284    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:53.614297    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:53.628795    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:53.628808    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:53.639974    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:53.639987    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:53.658538    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:53.658548    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:53.674149    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:53.674158    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:54.252395    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:54.252866    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:54.287569    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:43:54.287724    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:54.305376    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:43:54.305493    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:54.319162    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:43:54.319261    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:54.332325    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:43:54.332411    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:54.343332    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:43:54.343414    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:54.354270    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:43:54.354350    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:54.365272    5001 logs.go:276] 0 containers: []
	W0927 10:43:54.365283    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:54.365353    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:54.376503    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:43:54.376520    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:43:54.376524    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:43:54.391197    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:43:54.391211    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:43:54.406507    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:43:54.406520    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:43:54.424216    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:43:54.424232    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:43:54.443849    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:54.443861    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:54.449195    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:54.449208    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:54.485746    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:43:54.485760    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:43:54.497788    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:54.497799    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:54.522958    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:43:54.522966    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:54.534446    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:43:54.534459    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:43:54.548640    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:43:54.548650    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:43:54.560739    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:43:54.560752    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:43:54.573020    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:43:54.573031    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:43:54.584892    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:54.584905    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:43:54.601945    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:54.602042    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:54.618638    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:43:54.618645    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:43:54.630345    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:54.630358    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:43:54.630389    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:43:54.630394    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:43:54.630398    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:43:54.630401    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:43:54.630405    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:43:56.196283    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:01.198759    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:01.198843    5160 kubeadm.go:597] duration metric: took 4m2.909202s to restartPrimaryControlPlane
	W0927 10:44:01.198909    5160 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 10:44:01.198933    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0927 10:44:02.159506    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 10:44:02.164387    5160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 10:44:02.167462    5160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 10:44:02.170051    5160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 10:44:02.170057    5160 kubeadm.go:157] found existing configuration files:
	
	I0927 10:44:02.170087    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf
	I0927 10:44:02.172517    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 10:44:02.172543    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 10:44:02.175581    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf
	I0927 10:44:02.178041    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 10:44:02.178066    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 10:44:02.180625    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf
	I0927 10:44:02.183478    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 10:44:02.183506    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 10:44:02.186047    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf
	I0927 10:44:02.188706    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 10:44:02.188730    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 10:44:02.191868    5160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 10:44:02.210333    5160 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0927 10:44:02.210362    5160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 10:44:02.255815    5160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 10:44:02.255866    5160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 10:44:02.255927    5160 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 10:44:02.306493    5160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 10:44:02.310725    5160 out.go:235]   - Generating certificates and keys ...
	I0927 10:44:02.310766    5160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 10:44:02.310798    5160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 10:44:02.310840    5160 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 10:44:02.310872    5160 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 10:44:02.310909    5160 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 10:44:02.310943    5160 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 10:44:02.310976    5160 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 10:44:02.311008    5160 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 10:44:02.311061    5160 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 10:44:02.311119    5160 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 10:44:02.311138    5160 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 10:44:02.311175    5160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 10:44:02.354928    5160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 10:44:02.508589    5160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 10:44:02.641062    5160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 10:44:02.786495    5160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 10:44:02.815148    5160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 10:44:02.815554    5160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 10:44:02.815577    5160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 10:44:02.881875    5160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 10:44:02.886121    5160 out.go:235]   - Booting up control plane ...
	I0927 10:44:02.886167    5160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 10:44:02.886202    5160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 10:44:02.886235    5160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 10:44:02.886279    5160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 10:44:02.886351    5160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 10:44:04.634234    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:07.388480    5160 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501913 seconds
	I0927 10:44:07.388549    5160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 10:44:07.392092    5160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 10:44:07.917108    5160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 10:44:07.917455    5160 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-862000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 10:44:08.420672    5160 kubeadm.go:310] [bootstrap-token] Using token: grm2ho.lrxp2943rot0jvnk
	I0927 10:44:08.426701    5160 out.go:235]   - Configuring RBAC rules ...
	I0927 10:44:08.426762    5160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 10:44:08.426808    5160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 10:44:08.432240    5160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 10:44:08.433103    5160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 10:44:08.434021    5160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 10:44:08.434820    5160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 10:44:08.438180    5160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 10:44:08.607197    5160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 10:44:08.825232    5160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 10:44:08.825793    5160 kubeadm.go:310] 
	I0927 10:44:08.825822    5160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 10:44:08.825830    5160 kubeadm.go:310] 
	I0927 10:44:08.825865    5160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 10:44:08.825870    5160 kubeadm.go:310] 
	I0927 10:44:08.825892    5160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 10:44:08.825918    5160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 10:44:08.825989    5160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 10:44:08.825993    5160 kubeadm.go:310] 
	I0927 10:44:08.826019    5160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 10:44:08.826021    5160 kubeadm.go:310] 
	I0927 10:44:08.826056    5160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 10:44:08.826060    5160 kubeadm.go:310] 
	I0927 10:44:08.826087    5160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 10:44:08.826127    5160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 10:44:08.826170    5160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 10:44:08.826177    5160 kubeadm.go:310] 
	I0927 10:44:08.826214    5160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 10:44:08.826262    5160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 10:44:08.826267    5160 kubeadm.go:310] 
	I0927 10:44:08.826343    5160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token grm2ho.lrxp2943rot0jvnk \
	I0927 10:44:08.826394    5160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95f688fe91b88dce6b995ff3f6bae2601ecac9e72bc38ebf6a40f1df30a0f1f1 \
	I0927 10:44:08.826406    5160 kubeadm.go:310] 	--control-plane 
	I0927 10:44:08.826408    5160 kubeadm.go:310] 
	I0927 10:44:08.826451    5160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 10:44:08.826453    5160 kubeadm.go:310] 
	I0927 10:44:08.826492    5160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token grm2ho.lrxp2943rot0jvnk \
	I0927 10:44:08.826542    5160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95f688fe91b88dce6b995ff3f6bae2601ecac9e72bc38ebf6a40f1df30a0f1f1 
	I0927 10:44:08.828008    5160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 10:44:08.828122    5160 cni.go:84] Creating CNI manager for ""
	I0927 10:44:08.828133    5160 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:44:08.831023    5160 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 10:44:08.838132    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 10:44:08.840928    5160 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 10:44:08.845697    5160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 10:44:08.845743    5160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 10:44:08.845763    5160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-862000 minikube.k8s.io/updated_at=2024_09_27T10_44_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=stopped-upgrade-862000 minikube.k8s.io/primary=true
	I0927 10:44:08.888425    5160 ops.go:34] apiserver oom_adj: -16
	I0927 10:44:08.888440    5160 kubeadm.go:1113] duration metric: took 42.73925ms to wait for elevateKubeSystemPrivileges
	I0927 10:44:08.888446    5160 kubeadm.go:394] duration metric: took 4m10.612389042s to StartCluster
	I0927 10:44:08.888456    5160 settings.go:142] acquiring lock: {Name:mk58fc55a93399a03fb1c9ac710554db41068524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:44:08.888542    5160 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:44:08.888950    5160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/kubeconfig: {Name:mk8300c379932403020f33b54e2599e68fb2c757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:44:08.889174    5160 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:44:08.889203    5160 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 10:44:08.889280    5160 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-862000"
	I0927 10:44:08.889287    5160 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-862000"
	I0927 10:44:08.889292    5160 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-862000"
	I0927 10:44:08.889303    5160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-862000"
	I0927 10:44:08.889272    5160 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	W0927 10:44:08.889294    5160 addons.go:243] addon storage-provisioner should already be in state true
	I0927 10:44:08.889400    5160 host.go:66] Checking if "stopped-upgrade-862000" exists ...
	I0927 10:44:08.890301    5160 kapi.go:59] client config for stopped-upgrade-862000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/client.key", CAFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a965d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 10:44:08.890441    5160 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-862000"
	W0927 10:44:08.890446    5160 addons.go:243] addon default-storageclass should already be in state true
	I0927 10:44:08.890453    5160 host.go:66] Checking if "stopped-upgrade-862000" exists ...
	I0927 10:44:08.893169    5160 out.go:177] * Verifying Kubernetes components...
	I0927 10:44:08.893503    5160 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 10:44:08.897155    5160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 10:44:08.897161    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	I0927 10:44:08.900940    5160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:44:08.904903    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:44:08.909000    5160 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 10:44:08.909007    5160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 10:44:08.909012    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	I0927 10:44:08.976161    5160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 10:44:08.981602    5160 api_server.go:52] waiting for apiserver process to appear ...
	I0927 10:44:08.981650    5160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:44:08.985409    5160 api_server.go:72] duration metric: took 96.226708ms to wait for apiserver process to appear ...
	I0927 10:44:08.985417    5160 api_server.go:88] waiting for apiserver healthz status ...
	I0927 10:44:08.985424    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:08.998489    5160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 10:44:09.032923    5160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 10:44:09.366508    5160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 10:44:09.366521    5160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 10:44:09.636319    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:09.636421    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:44:09.647440    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:44:09.647514    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:44:09.658688    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:44:09.658764    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:44:09.669163    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:44:09.669252    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:44:09.679882    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:44:09.679960    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:44:09.690567    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:44:09.690645    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:44:09.701260    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:44:09.701339    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:44:09.712401    5001 logs.go:276] 0 containers: []
	W0927 10:44:09.712414    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:44:09.712487    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:44:09.723473    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:44:09.723489    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:44:09.723494    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:44:09.736442    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:44:09.736453    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:44:09.808727    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:44:09.808739    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:44:09.823605    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:44:09.823621    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:44:09.844560    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:44:09.844573    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:44:09.855735    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:44:09.855748    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:44:09.867492    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:44:09.867504    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:44:09.891513    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:44:09.891520    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:44:09.902860    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:44:09.902872    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:44:09.919419    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:09.919517    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:09.936057    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:44:09.936062    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:44:09.940769    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:44:09.940777    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:44:09.952509    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:44:09.952519    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:44:09.964363    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:44:09.964372    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:44:09.982878    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:44:09.982892    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:44:10.000777    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:44:10.000785    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:44:10.012365    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:10.012376    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:44:10.012401    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:44:10.012405    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:10.012408    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:10.012411    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:10.012414    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
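The one kubelet problem the log gatherer keeps flagging above is an authorization failure, not a crash: the node authorizer only lets a kubelet read objects that are tied to pods scheduled on its node, and it found no such tie for the "kube-root-ca.crt" ConfigMap ("no relationship found between node ... and this object"). A minimal client-go sketch of asking the API server the same access question via a SelfSubjectAccessReview — illustrative only, not something the harness runs, and the kubeconfig path (taken from the log) would only exist inside the guest:

```go
// Sketch (assumptions noted above): ask the API server whether the current
// credentials may list ConfigMaps in kube-system, mirroring the access the
// kubelet was denied in the log.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "list",
				Resource:  "configmaps",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```

Issued with the kubelet's node credentials, this would come back allowed=false with a reason echoing the message above.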
	I0927 10:44:13.987493    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:13.987596    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:18.988336    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:18.988402    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:20.016309    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:23.988805    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:23.988827    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:25.018401    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:25.018569    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:44:25.034015    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:44:25.034106    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:44:25.044730    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:44:25.044811    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:44:25.055656    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:44:25.055744    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:44:25.066738    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:44:25.066826    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:44:25.077587    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:44:25.077673    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:44:25.088383    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:44:25.088465    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:44:25.102690    5001 logs.go:276] 0 containers: []
	W0927 10:44:25.102701    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:44:25.102779    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:44:25.113271    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:44:25.113290    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:44:25.113296    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:44:25.125114    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:44:25.125126    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:44:25.137095    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:44:25.137105    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:44:25.174902    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:44:25.174912    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:44:25.186704    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:44:25.186715    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:44:25.206702    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:44:25.206713    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:44:25.224160    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:44:25.224170    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:44:25.235709    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:44:25.235720    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:44:25.260129    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:44:25.260137    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:44:25.264414    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:44:25.264424    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:44:25.275893    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:44:25.275903    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:44:25.288180    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:44:25.288194    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:44:25.299819    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:44:25.299829    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:44:25.316773    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:25.316871    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:25.333413    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:44:25.333419    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:44:25.348150    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:44:25.348159    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:44:25.371639    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:25.371649    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:44:25.371677    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:44:25.371683    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:25.371686    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:25.371689    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:25.371692    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:44:28.989369    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:28.989426    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:33.990299    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:33.990356    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:35.374315    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:38.991513    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:38.991550    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0927 10:44:39.367990    5160 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0927 10:44:39.373383    5160 out.go:177] * Enabled addons: storage-provisioner
	I0927 10:44:39.382332    5160 addons.go:510] duration metric: took 30.493922084s for enable addons: enabled=[storage-provisioner]
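For context on the addon warning just above: "Error listing StorageClasses ... i/o timeout" is a plain List call against the guest apiserver at 10.0.2.15:8443 that never gets a response. A hedged client-go equivalent — the kubeconfig path is copied from the log but lives inside the guest, and the explicit timeout is illustrative:

```go
// Sketch: the StorageClass List behind the addon error, issued directly.
// Only the endpoint behaviour (i/o timeout) comes from the log.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.Timeout = 10 * time.Second // fail fast instead of hanging on a dead apiserver
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("list storageclasses:", err) // e.g. "dial tcp 10.0.2.15:8443: i/o timeout"
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
```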
	I0927 10:44:40.376478    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:40.376674    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:44:40.395466    5001 logs.go:276] 1 containers: [db3364becc55]
	I0927 10:44:40.395567    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:44:40.408654    5001 logs.go:276] 1 containers: [79ecd94fc513]
	I0927 10:44:40.408742    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:44:40.419336    5001 logs.go:276] 4 containers: [1a23b73a2911 5283f1859705 11c6047c72b7 71bf4fcc074d]
	I0927 10:44:40.419426    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:44:40.430132    5001 logs.go:276] 1 containers: [f893a254fe1a]
	I0927 10:44:40.430213    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:44:40.440895    5001 logs.go:276] 1 containers: [c2de07c82fe2]
	I0927 10:44:40.440975    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:44:40.451664    5001 logs.go:276] 1 containers: [a0bf227b0b07]
	I0927 10:44:40.451750    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:44:40.461873    5001 logs.go:276] 0 containers: []
	W0927 10:44:40.461883    5001 logs.go:278] No container was found matching "kindnet"
	I0927 10:44:40.461945    5001 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:44:40.472435    5001 logs.go:276] 1 containers: [36cd4fe08ecc]
	I0927 10:44:40.472451    5001 logs.go:123] Gathering logs for dmesg ...
	I0927 10:44:40.472456    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:44:40.476939    5001 logs.go:123] Gathering logs for coredns [1a23b73a2911] ...
	I0927 10:44:40.476946    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a23b73a2911"
	I0927 10:44:40.488881    5001 logs.go:123] Gathering logs for kube-scheduler [f893a254fe1a] ...
	I0927 10:44:40.488890    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f893a254fe1a"
	I0927 10:44:40.503629    5001 logs.go:123] Gathering logs for kube-controller-manager [a0bf227b0b07] ...
	I0927 10:44:40.503639    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0bf227b0b07"
	I0927 10:44:40.526351    5001 logs.go:123] Gathering logs for container status ...
	I0927 10:44:40.526362    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:44:40.538639    5001 logs.go:123] Gathering logs for kube-apiserver [db3364becc55] ...
	I0927 10:44:40.538648    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3364becc55"
	I0927 10:44:40.553064    5001 logs.go:123] Gathering logs for coredns [71bf4fcc074d] ...
	I0927 10:44:40.553073    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71bf4fcc074d"
	I0927 10:44:40.564624    5001 logs.go:123] Gathering logs for kube-proxy [c2de07c82fe2] ...
	I0927 10:44:40.564634    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2de07c82fe2"
	I0927 10:44:40.576208    5001 logs.go:123] Gathering logs for storage-provisioner [36cd4fe08ecc] ...
	I0927 10:44:40.576218    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36cd4fe08ecc"
	I0927 10:44:40.587486    5001 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:44:40.587495    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:44:40.624857    5001 logs.go:123] Gathering logs for etcd [79ecd94fc513] ...
	I0927 10:44:40.624867    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79ecd94fc513"
	I0927 10:44:40.639939    5001 logs.go:123] Gathering logs for coredns [11c6047c72b7] ...
	I0927 10:44:40.639949    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11c6047c72b7"
	I0927 10:44:40.651475    5001 logs.go:123] Gathering logs for Docker ...
	I0927 10:44:40.651486    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:44:40.675393    5001 logs.go:123] Gathering logs for kubelet ...
	I0927 10:44:40.675411    5001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 10:44:40.695332    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:40.695438    5001 logs.go:138] Found kubelet problem: Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:40.712483    5001 logs.go:123] Gathering logs for coredns [5283f1859705] ...
	I0927 10:44:40.712493    5001 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5283f1859705"
	I0927 10:44:40.724071    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:40.724082    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 10:44:40.724107    5001 out.go:270] X Problems detected in kubelet:
	W0927 10:44:40.724112    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: W0927 17:36:55.845083    3714 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	W0927 10:44:40.724115    5001 out.go:270]   Sep 27 17:36:55 running-upgrade-198000 kubelet[3714]: E0927 17:36:55.845094    3714 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-198000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-198000' and this object
	I0927 10:44:40.724118    5001 out.go:358] Setting ErrFile to fd 2...
	I0927 10:44:40.724121    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:44:43.992971    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:43.993012    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:48.994844    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:48.994874    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:50.726282    5001 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:53.996982    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:53.997018    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:55.728580    5001 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:55.732126    5001 out.go:201] 
	W0927 10:44:55.736136    5001 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0927 10:44:55.736150    5001 out.go:270] * 
	W0927 10:44:55.737147    5001 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:44:55.748094    5001 out.go:201] 
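The GUEST_START failure above is the endgame of the healthz polling that dominates this log: every probe of https://10.0.2.15:8443/healthz from process 5001 (and 5160) dies with a client-side timeout. A minimal Go reproduction of that probe, assuming only the endpoint from the log — TLS verification is skipped purely for illustration, whereas minikube itself authenticates with the cluster's client certificates:

```go
// Sketch of the probe behind "Checking apiserver healthz" above: a GET with a
// short client timeout against the guest endpoint. A dead endpoint yields the
// exact error string seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s probe cadence in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```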
	I0927 10:44:58.999132    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:58.999174    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:45:04.001114    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:45:04.001169    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:45:09.002730    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:45:09.002902    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:45:09.031849    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:45:09.031943    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:45:09.050178    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:45:09.050262    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:45:09.061125    5160 logs.go:276] 2 containers: [2993db39b491 e27ee549c2f1]
	I0927 10:45:09.061210    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:45:09.071422    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:45:09.071502    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:45:09.081516    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:45:09.081598    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:45:09.091807    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:45:09.091890    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:45:09.102198    5160 logs.go:276] 0 containers: []
	W0927 10:45:09.102209    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:45:09.102277    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:45:09.112599    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:45:09.112613    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:45:09.112619    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:45:09.149706    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:45:09.149718    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:45:09.191086    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:45:09.191097    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:45:09.206132    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:45:09.206141    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:45:09.219658    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:45:09.219668    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:45:09.230870    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:45:09.230881    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:45:09.243663    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:45:09.243674    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:45:09.261045    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:45:09.261059    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:45:09.266290    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:45:09.266299    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:45:09.281348    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:45:09.281358    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:45:09.293355    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:45:09.293366    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:45:09.305455    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:45:09.305468    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:45:09.331449    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:45:09.331460    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-09-27 17:35:59 UTC, ends at Fri 2024-09-27 17:45:11 UTC. --
	Sep 27 17:44:53 running-upgrade-198000 dockerd[3201]: time="2024-09-27T17:44:53.893925444Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f8307ee2f3770aea972bc5fe4e3942001624dcb1b5cd013111342351ae812a9f pid=15597 runtime=io.containerd.runc.v2
	Sep 27 17:44:53 running-upgrade-198000 dockerd[3201]: time="2024-09-27T17:44:53.906542189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 17:44:53 running-upgrade-198000 dockerd[3201]: time="2024-09-27T17:44:53.906610309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 17:44:53 running-upgrade-198000 dockerd[3201]: time="2024-09-27T17:44:53.906621183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 17:44:53 running-upgrade-198000 dockerd[3201]: time="2024-09-27T17:44:53.906745966Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7e96391044340d6d1a8569486f03711498b96d73f79ab6295176a93f63f5cf0f pid=15621 runtime=io.containerd.runc.v2
	Sep 27 17:44:54 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:44:54Z" level=error msg="ContainerStats resp: {0x400094e600 linux}"
	Sep 27 17:44:54 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:44:54Z" level=error msg="ContainerStats resp: {0x40007fb080 linux}"
	Sep 27 17:44:54 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:44:54Z" level=error msg="ContainerStats resp: {0x400094ec80 linux}"
	Sep 27 17:44:54 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:44:54Z" level=error msg="ContainerStats resp: {0x40007fb880 linux}"
	Sep 27 17:44:54 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:44:54Z" level=error msg="ContainerStats resp: {0x400094e240 linux}"
	Sep 27 17:44:54 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:44:54Z" level=error msg="ContainerStats resp: {0x400094e6c0 linux}"
	Sep 27 17:44:56 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:44:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 27 17:45:01 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 27 17:45:04 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:04Z" level=error msg="ContainerStats resp: {0x40007ab280 linux}"
	Sep 27 17:45:04 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:04Z" level=error msg="ContainerStats resp: {0x40007abc00 linux}"
	Sep 27 17:45:05 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:05Z" level=error msg="ContainerStats resp: {0x40007ab540 linux}"
	Sep 27 17:45:06 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:06Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 27 17:45:06 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:06Z" level=error msg="ContainerStats resp: {0x4000628740 linux}"
	Sep 27 17:45:06 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:06Z" level=error msg="ContainerStats resp: {0x40007fb9c0 linux}"
	Sep 27 17:45:06 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:06Z" level=error msg="ContainerStats resp: {0x4000629140 linux}"
	Sep 27 17:45:06 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:06Z" level=error msg="ContainerStats resp: {0x4000629400 linux}"
	Sep 27 17:45:06 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:06Z" level=error msg="ContainerStats resp: {0x4000629600 linux}"
	Sep 27 17:45:06 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:06Z" level=error msg="ContainerStats resp: {0x4000629a80 linux}"
	Sep 27 17:45:06 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:06Z" level=error msg="ContainerStats resp: {0x40006be180 linux}"
	Sep 27 17:45:11 running-upgrade-198000 cri-dockerd[3041]: time="2024-09-27T17:45:11Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	7e96391044340       edaa71f2aee88       18 seconds ago      Running             coredns                   2                   6eb323928f459
	f8307ee2f3770       edaa71f2aee88       18 seconds ago      Running             coredns                   2                   e27b31acb0824
	1a23b73a2911b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   6eb323928f459
	5283f18597059       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   e27b31acb0824
	36cd4fe08eccc       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   4616989476e85
	c2de07c82fe21       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   76fb58a7bc84e
	f893a254fe1a0       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   b6b7b7046dda1
	a0bf227b0b07b       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   d30dc82f74b07
	db3364becc558       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   289e1d3ddf196
	79ecd94fc5131       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   8b0b2f1e80b8b
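Worth noting in the table above: the control-plane containers (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) plus kube-proxy and storage-provisioner have been Running for 4 minutes on their first attempt, and only the coredns containers have restarted (attempt 2, 18 seconds old). The apiserver container stays alive throughout, so the healthz timeouts earlier in the log are consistent with the host being unable to reach the guest address 10.0.2.15:8443 over QEMU's user-mode network, rather than with a crashed apiserver.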
	
	
	==> coredns [1a23b73a2911] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3399009883512721015.503696998647434745. HINFO: read udp 10.244.0.3:44038->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3399009883512721015.503696998647434745. HINFO: read udp 10.244.0.3:44349->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3399009883512721015.503696998647434745. HINFO: read udp 10.244.0.3:48169->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3399009883512721015.503696998647434745. HINFO: read udp 10.244.0.3:34588->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3399009883512721015.503696998647434745. HINFO: read udp 10.244.0.3:33275->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3399009883512721015.503696998647434745. HINFO: read udp 10.244.0.3:48254->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3399009883512721015.503696998647434745. HINFO: read udp 10.244.0.3:59303->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3399009883512721015.503696998647434745. HINFO: read udp 10.244.0.3:35010->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3399009883512721015.503696998647434745. HINFO: read udp 10.244.0.3:46462->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3399009883512721015.503696998647434745. HINFO: read udp 10.244.0.3:35249->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5283f1859705] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8142757867754633261.1123866206059710787. HINFO: read udp 10.244.0.2:44609->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8142757867754633261.1123866206059710787. HINFO: read udp 10.244.0.2:36505->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8142757867754633261.1123866206059710787. HINFO: read udp 10.244.0.2:50085->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8142757867754633261.1123866206059710787. HINFO: read udp 10.244.0.2:35021->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8142757867754633261.1123866206059710787. HINFO: read udp 10.244.0.2:56570->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8142757867754633261.1123866206059710787. HINFO: read udp 10.244.0.2:49950->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8142757867754633261.1123866206059710787. HINFO: read udp 10.244.0.2:39829->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8142757867754633261.1123866206059710787. HINFO: read udp 10.244.0.2:60093->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8142757867754633261.1123866206059710787. HINFO: read udp 10.244.0.2:57685->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8142757867754633261.1123866206059710787. HINFO: read udp 10.244.0.2:52219->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7e9639104434] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7409940714977390836.4404302255741706999. HINFO: read udp 10.244.0.3:43905->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7409940714977390836.4404302255741706999. HINFO: read udp 10.244.0.3:52695->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7409940714977390836.4404302255741706999. HINFO: read udp 10.244.0.3:37311->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7409940714977390836.4404302255741706999. HINFO: read udp 10.244.0.3:51114->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f8307ee2f377] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4846385195158136635.2051467989139565146. HINFO: read udp 10.244.0.2:41968->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4846385195158136635.2051467989139565146. HINFO: read udp 10.244.0.2:36179->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4846385195158136635.2051467989139565146. HINFO: read udp 10.244.0.2:49538->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4846385195158136635.2051467989139565146. HINFO: read udp 10.244.0.2:60646->10.0.2.3:53: i/o timeout
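All four CoreDNS instances fail the same way: their startup HINFO self-probe to the upstream resolver at 10.0.2.3:53 — the DNS address QEMU's user-mode networking exposes to the guest — times out. A hedged sketch that queries that upstream directly with a short deadline (the hostname and timeout are illustrative; only the resolver address comes from the log):

```go
// Sketch: reproduce the upstream lookups CoreDNS is timing out on by forcing
// Go's resolver to talk to the guest's upstream DNS with a 2s deadline.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Ignore the system-picked address and ask the guest's upstream directly.
			return d.DialContext(ctx, "udp", "10.0.2.3:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.io")
	if err != nil {
		fmt.Println("lookup:", err) // an i/o timeout here matches the CoreDNS errors above
		return
	}
	fmt.Println(addrs)
}
```

Run from inside the guest, this would reproduce the i/o timeouts above whenever the slirp DNS path is down.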
	
	
	==> describe nodes <==
	Name:               running-upgrade-198000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-198000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=running-upgrade-198000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T10_40_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:40:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-198000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:45:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:40:51 +0000   Fri, 27 Sep 2024 17:40:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:40:51 +0000   Fri, 27 Sep 2024 17:40:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:40:51 +0000   Fri, 27 Sep 2024 17:40:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:40:51 +0000   Fri, 27 Sep 2024 17:40:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-198000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ffa0d9a3dd64d43bbb167fa725ff8f5
	  System UUID:                5ffa0d9a3dd64d43bbb167fa725ff8f5
	  Boot ID:                    9bfc1448-ea81-4a69-abfa-d588ffa6c68f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-rsv94                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-tv8tl                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-198000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-198000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-198000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-hznc2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-running-upgrade-198000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m6s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-198000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-198000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-198000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-198000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-198000 event: Registered Node running-upgrade-198000 in Controller
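As a sanity check, the Allocated resources block is just the column sums of the pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 100m = 850m, i.e. 42% of the node's 2 CPUs (2000m); memory requests 70Mi + 70Mi + 100Mi = 240Mi ≈ 11% of 2148820Ki, and memory limits 170Mi + 170Mi = 340Mi ≈ 16%.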
	
	
	==> dmesg <==
	[  +1.312339] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.062634] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.064182] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.224451] systemd-fstab-generator[1051]: Ignoring "noauto" for root device
	[  +0.072740] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
	[  +2.358586] systemd-fstab-generator[1293]: Ignoring "noauto" for root device
	[  +0.361568] kauditd_printk_skb: 92 callbacks suppressed
	[  +9.296972] systemd-fstab-generator[1939]: Ignoring "noauto" for root device
	[  +2.571009] systemd-fstab-generator[2215]: Ignoring "noauto" for root device
	[  +0.144973] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[  +0.072847] systemd-fstab-generator[2261]: Ignoring "noauto" for root device
	[  +0.101442] systemd-fstab-generator[2274]: Ignoring "noauto" for root device
	[  +2.603814] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.194311] systemd-fstab-generator[2997]: Ignoring "noauto" for root device
	[  +0.060015] systemd-fstab-generator[3009]: Ignoring "noauto" for root device
	[  +0.086916] systemd-fstab-generator[3020]: Ignoring "noauto" for root device
	[  +0.065598] systemd-fstab-generator[3034]: Ignoring "noauto" for root device
	[  +2.226545] systemd-fstab-generator[3188]: Ignoring "noauto" for root device
	[  +3.688655] systemd-fstab-generator[3580]: Ignoring "noauto" for root device
	[  +0.837037] systemd-fstab-generator[3707]: Ignoring "noauto" for root device
	[ +20.022764] kauditd_printk_skb: 68 callbacks suppressed
	[Sep27 17:40] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.522847] systemd-fstab-generator[10047]: Ignoring "noauto" for root device
	[  +5.635603] systemd-fstab-generator[10661]: Ignoring "noauto" for root device
	[  +0.463304] systemd-fstab-generator[10794]: Ignoring "noauto" for root device
	
	
	==> etcd [79ecd94fc513] <==
	{"level":"info","ts":"2024-09-27T17:40:46.423Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-27T17:40:46.423Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-27T17:40:46.423Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-27T17:40:46.423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-27T17:40:46.423Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-27T17:40:46.424Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T17:40:46.424Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T17:40:47.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-27T17:40:47.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T17:40:47.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-27T17:40:47.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T17:40:47.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-27T17:40:47.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-27T17:40:47.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-27T17:40:47.218Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T17:40:47.221Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T17:40:47.221Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T17:40:47.221Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T17:40:47.222Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-198000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T17:40:47.222Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T17:40:47.224Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T17:40:47.227Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T17:40:47.230Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-27T17:40:47.230Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T17:40:47.230Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 17:45:12 up 9 min,  0 users,  load average: 0.24, 0.36, 0.24
	Linux running-upgrade-198000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [db3364becc55] <==
	I0927 17:40:48.540896       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0927 17:40:48.558760       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0927 17:40:48.559941       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 17:40:48.560689       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0927 17:40:48.560759       1 cache.go:39] Caches are synced for autoregister controller
	I0927 17:40:48.565183       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0927 17:40:48.589700       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0927 17:40:49.288881       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0927 17:40:49.465368       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0927 17:40:49.467754       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0927 17:40:49.467773       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 17:40:49.593054       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 17:40:49.604983       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 17:40:49.626352       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0927 17:40:49.628367       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0927 17:40:49.628736       1 controller.go:611] quota admission added evaluator for: endpoints
	I0927 17:40:49.630037       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 17:40:50.603128       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0927 17:40:51.143375       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0927 17:40:51.149909       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0927 17:40:51.154242       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0927 17:40:51.196562       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 17:41:05.411650       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0927 17:41:05.510932       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0927 17:41:05.919328       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [a0bf227b0b07] <==
	I0927 17:41:04.866798       1 shared_informer.go:262] Caches are synced for resource quota
	I0927 17:41:04.868906       1 shared_informer.go:262] Caches are synced for resource quota
	I0927 17:41:04.906280       1 shared_informer.go:262] Caches are synced for job
	I0927 17:41:04.909146       1 shared_informer.go:262] Caches are synced for ephemeral
	I0927 17:41:04.909150       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0927 17:41:04.909165       1 shared_informer.go:262] Caches are synced for disruption
	I0927 17:41:04.909239       1 disruption.go:371] Sending events to api server.
	I0927 17:41:04.909169       1 shared_informer.go:262] Caches are synced for GC
	I0927 17:41:04.909171       1 shared_informer.go:262] Caches are synced for PVC protection
	I0927 17:41:04.909177       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0927 17:41:04.910254       1 shared_informer.go:262] Caches are synced for taint
	I0927 17:41:04.910283       1 shared_informer.go:262] Caches are synced for deployment
	I0927 17:41:04.910322       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0927 17:41:04.910364       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-198000. Assuming now as a timestamp.
	I0927 17:41:04.910400       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0927 17:41:04.910429       1 event.go:294] "Event occurred" object="running-upgrade-198000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-198000 event: Registered Node running-upgrade-198000 in Controller"
	I0927 17:41:04.910262       1 shared_informer.go:262] Caches are synced for endpoint
	I0927 17:41:04.910487       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0927 17:41:05.278796       1 shared_informer.go:262] Caches are synced for garbage collector
	I0927 17:41:05.309963       1 shared_informer.go:262] Caches are synced for garbage collector
	I0927 17:41:05.309971       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0927 17:41:05.415396       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hznc2"
	I0927 17:41:05.512129       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0927 17:41:05.661602       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-rsv94"
	I0927 17:41:05.664010       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-tv8tl"
	
	
	==> kube-proxy [c2de07c82fe2] <==
	I0927 17:41:05.890804       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0927 17:41:05.890828       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0927 17:41:05.890838       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0927 17:41:05.915286       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0927 17:41:05.915296       1 server_others.go:206] "Using iptables Proxier"
	I0927 17:41:05.915310       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0927 17:41:05.915401       1 server.go:661] "Version info" version="v1.24.1"
	I0927 17:41:05.915406       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:41:05.917672       1 config.go:317] "Starting service config controller"
	I0927 17:41:05.917689       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0927 17:41:05.917702       1 config.go:226] "Starting endpoint slice config controller"
	I0927 17:41:05.917708       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0927 17:41:05.918036       1 config.go:444] "Starting node config controller"
	I0927 17:41:05.918190       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0927 17:41:06.018103       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0927 17:41:06.018130       1 shared_informer.go:262] Caches are synced for service config
	I0927 17:41:06.018255       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [f893a254fe1a] <==
	W0927 17:40:48.519436       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 17:40:48.519459       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0927 17:40:48.519530       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 17:40:48.519537       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0927 17:40:48.519561       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 17:40:48.519607       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0927 17:40:48.519647       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 17:40:48.519654       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0927 17:40:48.519706       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 17:40:48.519713       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0927 17:40:48.519772       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 17:40:48.519793       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0927 17:40:48.519856       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 17:40:48.519882       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0927 17:40:48.519957       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 17:40:48.519964       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0927 17:40:49.360068       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 17:40:49.360192       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0927 17:40:49.398720       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 17:40:49.398794       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0927 17:40:49.427655       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 17:40:49.427673       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0927 17:40:49.454755       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 17:40:49.454821       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0927 17:40:50.110549       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-09-27 17:35:59 UTC, ends at Fri 2024-09-27 17:45:12 UTC. --
	Sep 27 17:40:52 running-upgrade-198000 kubelet[10667]: E0927 17:40:52.976500   10667 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-198000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-198000"
	Sep 27 17:40:53 running-upgrade-198000 kubelet[10667]: E0927 17:40:53.174864   10667 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-198000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-198000"
	Sep 27 17:40:53 running-upgrade-198000 kubelet[10667]: I0927 17:40:53.370437   10667 request.go:601] Waited for 1.143975657s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 27 17:40:53 running-upgrade-198000 kubelet[10667]: E0927 17:40:53.376068   10667 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-198000\" already exists" pod="kube-system/etcd-running-upgrade-198000"
	Sep 27 17:41:04 running-upgrade-198000 kubelet[10667]: I0927 17:41:04.620620   10667 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 27 17:41:04 running-upgrade-198000 kubelet[10667]: I0927 17:41:04.620903   10667 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 27 17:41:04 running-upgrade-198000 kubelet[10667]: I0927 17:41:04.916226   10667 topology_manager.go:200] "Topology Admit Handler"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.023396   10667 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqfms\" (UniqueName: \"kubernetes.io/projected/aea8c63d-c277-4308-917d-9cf9efcceb7b-kube-api-access-cqfms\") pod \"storage-provisioner\" (UID: \"aea8c63d-c277-4308-917d-9cf9efcceb7b\") " pod="kube-system/storage-provisioner"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.023497   10667 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aea8c63d-c277-4308-917d-9cf9efcceb7b-tmp\") pod \"storage-provisioner\" (UID: \"aea8c63d-c277-4308-917d-9cf9efcceb7b\") " pod="kube-system/storage-provisioner"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: E0927 17:41:05.127059   10667 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: E0927 17:41:05.127078   10667 projected.go:192] Error preparing data for projected volume kube-api-access-cqfms for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: E0927 17:41:05.127114   10667 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/aea8c63d-c277-4308-917d-9cf9efcceb7b-kube-api-access-cqfms podName:aea8c63d-c277-4308-917d-9cf9efcceb7b nodeName:}" failed. No retries permitted until 2024-09-27 17:41:05.627100046 +0000 UTC m=+14.495589352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqfms" (UniqueName: "kubernetes.io/projected/aea8c63d-c277-4308-917d-9cf9efcceb7b-kube-api-access-cqfms") pod "storage-provisioner" (UID: "aea8c63d-c277-4308-917d-9cf9efcceb7b") : configmap "kube-root-ca.crt" not found
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.419828   10667 topology_manager.go:200] "Topology Admit Handler"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.426043   10667 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ca07527-b926-4696-951b-73b08fe57e14-kube-proxy\") pod \"kube-proxy-hznc2\" (UID: \"4ca07527-b926-4696-951b-73b08fe57e14\") " pod="kube-system/kube-proxy-hznc2"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.426146   10667 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ca07527-b926-4696-951b-73b08fe57e14-xtables-lock\") pod \"kube-proxy-hznc2\" (UID: \"4ca07527-b926-4696-951b-73b08fe57e14\") " pod="kube-system/kube-proxy-hznc2"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.426179   10667 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ca07527-b926-4696-951b-73b08fe57e14-lib-modules\") pod \"kube-proxy-hznc2\" (UID: \"4ca07527-b926-4696-951b-73b08fe57e14\") " pod="kube-system/kube-proxy-hznc2"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.426222   10667 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5pvv\" (UniqueName: \"kubernetes.io/projected/4ca07527-b926-4696-951b-73b08fe57e14-kube-api-access-m5pvv\") pod \"kube-proxy-hznc2\" (UID: \"4ca07527-b926-4696-951b-73b08fe57e14\") " pod="kube-system/kube-proxy-hznc2"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.667415   10667 topology_manager.go:200] "Topology Admit Handler"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.670993   10667 topology_manager.go:200] "Topology Admit Handler"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.829076   10667 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e908d4df-5953-419a-8694-dbc441c045d2-config-volume\") pod \"coredns-6d4b75cb6d-rsv94\" (UID: \"e908d4df-5953-419a-8694-dbc441c045d2\") " pod="kube-system/coredns-6d4b75cb6d-rsv94"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.829105   10667 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrmwh\" (UniqueName: \"kubernetes.io/projected/e908d4df-5953-419a-8694-dbc441c045d2-kube-api-access-lrmwh\") pod \"coredns-6d4b75cb6d-rsv94\" (UID: \"e908d4df-5953-419a-8694-dbc441c045d2\") " pod="kube-system/coredns-6d4b75cb6d-rsv94"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.829116   10667 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2be4c22e-c006-4572-8a03-6b5746b7fecb-config-volume\") pod \"coredns-6d4b75cb6d-tv8tl\" (UID: \"2be4c22e-c006-4572-8a03-6b5746b7fecb\") " pod="kube-system/coredns-6d4b75cb6d-tv8tl"
	Sep 27 17:41:05 running-upgrade-198000 kubelet[10667]: I0927 17:41:05.829129   10667 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn48w\" (UniqueName: \"kubernetes.io/projected/2be4c22e-c006-4572-8a03-6b5746b7fecb-kube-api-access-kn48w\") pod \"coredns-6d4b75cb6d-tv8tl\" (UID: \"2be4c22e-c006-4572-8a03-6b5746b7fecb\") " pod="kube-system/coredns-6d4b75cb6d-tv8tl"
	Sep 27 17:44:53 running-upgrade-198000 kubelet[10667]: I0927 17:44:53.830657   10667 scope.go:110] "RemoveContainer" containerID="71bf4fcc074d4305ae8ebbaed315d37434e8c59915f146da1b70b5355a56cb78"
	Sep 27 17:44:53 running-upgrade-198000 kubelet[10667]: I0927 17:44:53.841346   10667 scope.go:110] "RemoveContainer" containerID="11c6047c72b75a9d96f316a4046df6f742f9fb945d113f921872ee90cdd445df"
	
	
	==> storage-provisioner [36cd4fe08ecc] <==
	I0927 17:41:06.079323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 17:41:06.090846       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 17:41:06.091154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 17:41:06.095279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 17:41:06.095313       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77c70a8d-1352-42f2-9804-512dca8c6735", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-198000_e3ccec0d-42c7-4bb7-b342-c053b1e73dcd became leader
	I0927 17:41:06.095440       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-198000_e3ccec0d-42c7-4bb7-b342-c053b1e73dcd!
	I0927 17:41:06.195590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-198000_e3ccec0d-42c7-4bb7-b342-c053b1e73dcd!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-198000 -n running-upgrade-198000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-198000 -n running-upgrade-198000: exit status 2 (15.762573333s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-198000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-198000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-198000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-198000: (1.112108875s)
--- FAIL: TestRunningBinaryUpgrade (600.57s)
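Note on the failure mode: the control-plane logs above show a clean start at 17:40–17:41, yet the status probe four minutes later reports the apiserver as Stopped. When triaging this kind of result it can help to probe the apiserver health endpoint directly, independent of the status command. The sketch below is purely illustrative: the node IP and port are the ones that appear in the logs above, and skipping TLS verification is an assumption made for a throwaway probe, not minikube's actual status logic.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hypothetical endpoint: node IP and apiserver port as seen in the logs above.
	const healthURL = "https://10.0.2.15:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Test clusters use a self-signed CA; skip verification for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(healthURL)
	if err != nil {
		fmt.Println("apiserver unreachable (reads as Stopped):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver reachable:", resp.Status)
}
```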

TestKubernetesUpgrade (17.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-768000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-768000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.913978292s)

-- stdout --
	* [kubernetes-upgrade-768000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-768000" primary control-plane node in "kubernetes-upgrade-768000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-768000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:38:28.469540    5085 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:38:28.469674    5085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:38:28.469678    5085 out.go:358] Setting ErrFile to fd 2...
	I0927 10:38:28.469680    5085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:38:28.469801    5085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:38:28.470837    5085 out.go:352] Setting JSON to false
	I0927 10:38:28.487713    5085 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4072,"bootTime":1727454636,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:38:28.487790    5085 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:38:28.493682    5085 out.go:177] * [kubernetes-upgrade-768000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:38:28.501726    5085 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:38:28.501768    5085 notify.go:220] Checking for updates...
	I0927 10:38:28.508735    5085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:38:28.510105    5085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:38:28.513649    5085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:38:28.516777    5085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:38:28.518090    5085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:38:28.521015    5085 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:38:28.521078    5085 config.go:182] Loaded profile config "running-upgrade-198000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:38:28.521125    5085 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:38:28.524664    5085 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:38:28.529634    5085 start.go:297] selected driver: qemu2
	I0927 10:38:28.529642    5085 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:38:28.529648    5085 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:38:28.531733    5085 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:38:28.534746    5085 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:38:28.538609    5085 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 10:38:28.538623    5085 cni.go:84] Creating CNI manager for ""
	I0927 10:38:28.538644    5085 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0927 10:38:28.538671    5085 start.go:340] cluster config:
	{Name:kubernetes-upgrade-768000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-768000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:38:28.542174    5085 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:38:28.550716    5085 out.go:177] * Starting "kubernetes-upgrade-768000" primary control-plane node in "kubernetes-upgrade-768000" cluster
	I0927 10:38:28.554691    5085 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 10:38:28.554703    5085 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0927 10:38:28.554713    5085 cache.go:56] Caching tarball of preloaded images
	I0927 10:38:28.554761    5085 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:38:28.554766    5085 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0927 10:38:28.554811    5085 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/kubernetes-upgrade-768000/config.json ...
	I0927 10:38:28.554820    5085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/kubernetes-upgrade-768000/config.json: {Name:mkb9b29498fd9c2460c1e16949610454a228e712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:38:28.555178    5085 start.go:360] acquireMachinesLock for kubernetes-upgrade-768000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:38:28.555208    5085 start.go:364] duration metric: took 24.291µs to acquireMachinesLock for "kubernetes-upgrade-768000"
	I0927 10:38:28.555219    5085 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-768000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-768000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:38:28.555246    5085 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:38:28.558719    5085 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:38:28.574039    5085 start.go:159] libmachine.API.Create for "kubernetes-upgrade-768000" (driver="qemu2")
	I0927 10:38:28.574067    5085 client.go:168] LocalClient.Create starting
	I0927 10:38:28.574148    5085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:38:28.574178    5085 main.go:141] libmachine: Decoding PEM data...
	I0927 10:38:28.574186    5085 main.go:141] libmachine: Parsing certificate...
	I0927 10:38:28.574222    5085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:38:28.574244    5085 main.go:141] libmachine: Decoding PEM data...
	I0927 10:38:28.574253    5085 main.go:141] libmachine: Parsing certificate...
	I0927 10:38:28.574667    5085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:38:28.766097    5085 main.go:141] libmachine: Creating SSH key...
	I0927 10:38:28.830675    5085 main.go:141] libmachine: Creating Disk image...
	I0927 10:38:28.830681    5085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:38:28.830897    5085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2
	I0927 10:38:28.840225    5085 main.go:141] libmachine: STDOUT: 
	I0927 10:38:28.840247    5085 main.go:141] libmachine: STDERR: 
	I0927 10:38:28.840299    5085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2 +20000M
	I0927 10:38:28.848437    5085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:38:28.848462    5085 main.go:141] libmachine: STDERR: 
	I0927 10:38:28.848477    5085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2
	I0927 10:38:28.848482    5085 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:38:28.848495    5085 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:38:28.848523    5085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:d1:4c:1e:25:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2
	I0927 10:38:28.850139    5085 main.go:141] libmachine: STDOUT: 
	I0927 10:38:28.850156    5085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:38:28.850182    5085 client.go:171] duration metric: took 276.11675ms to LocalClient.Create
	I0927 10:38:30.852311    5085 start.go:128] duration metric: took 2.297096417s to createHost
	I0927 10:38:30.852387    5085 start.go:83] releasing machines lock for "kubernetes-upgrade-768000", held for 2.297228542s
	W0927 10:38:30.852473    5085 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:38:30.865972    5085 out.go:177] * Deleting "kubernetes-upgrade-768000" in qemu2 ...
	W0927 10:38:30.900689    5085 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:38:30.900717    5085 start.go:729] Will try again in 5 seconds ...
	I0927 10:38:35.902809    5085 start.go:360] acquireMachinesLock for kubernetes-upgrade-768000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:38:35.903479    5085 start.go:364] duration metric: took 557.125µs to acquireMachinesLock for "kubernetes-upgrade-768000"
	I0927 10:38:35.903626    5085 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-768000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-768000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:38:35.903921    5085 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:38:35.925712    5085 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:38:35.971174    5085 start.go:159] libmachine.API.Create for "kubernetes-upgrade-768000" (driver="qemu2")
	I0927 10:38:35.971235    5085 client.go:168] LocalClient.Create starting
	I0927 10:38:35.971355    5085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:38:35.971432    5085 main.go:141] libmachine: Decoding PEM data...
	I0927 10:38:35.971447    5085 main.go:141] libmachine: Parsing certificate...
	I0927 10:38:35.971516    5085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:38:35.971562    5085 main.go:141] libmachine: Decoding PEM data...
	I0927 10:38:35.971578    5085 main.go:141] libmachine: Parsing certificate...
	I0927 10:38:35.972287    5085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:38:36.138506    5085 main.go:141] libmachine: Creating SSH key...
	I0927 10:38:36.284794    5085 main.go:141] libmachine: Creating Disk image...
	I0927 10:38:36.284802    5085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:38:36.285239    5085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2
	I0927 10:38:36.294540    5085 main.go:141] libmachine: STDOUT: 
	I0927 10:38:36.294563    5085 main.go:141] libmachine: STDERR: 
	I0927 10:38:36.294620    5085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2 +20000M
	I0927 10:38:36.302458    5085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:38:36.302480    5085 main.go:141] libmachine: STDERR: 
	I0927 10:38:36.302491    5085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2
	I0927 10:38:36.302497    5085 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:38:36.302507    5085 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:38:36.302539    5085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:41:5f:6d:96:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2
	I0927 10:38:36.304238    5085 main.go:141] libmachine: STDOUT: 
	I0927 10:38:36.304251    5085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:38:36.304265    5085 client.go:171] duration metric: took 333.031416ms to LocalClient.Create
	I0927 10:38:38.306432    5085 start.go:128] duration metric: took 2.402536084s to createHost
	I0927 10:38:38.306514    5085 start.go:83] releasing machines lock for "kubernetes-upgrade-768000", held for 2.403073125s
	W0927 10:38:38.306953    5085 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-768000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-768000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:38:38.317469    5085 out.go:201] 
	W0927 10:38:38.328785    5085 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:38:38.328852    5085 out.go:270] * 
	* 
	W0927 10:38:38.330825    5085 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:38:38.341655    5085 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-768000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
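Both provisioning attempts above fail at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon's unix socket at /var/run/socket_vmnet (the SocketVMnetPath in the cluster config), so QEMU never launches. A minimal pre-flight check for that condition is sketched below; it is a stdlib-only illustration for triage, not part of the test suite.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from SocketVMnetPath in the cluster config above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Reproduces the "Connection refused" in the logs when the
		// socket_vmnet daemon is not running (or the socket is inaccessible).
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```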
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-768000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-768000: (1.932020709s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-768000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-768000 status --format={{.Host}}: exit status 7 (33.135791ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
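The --format={{.Host}} argument is parsed as a Go text/template and executed against the profile's status object, which is why a stopped VM renders as the bare string "Stopped" while the command itself exits 7. A toy sketch of that rendering pattern follows; the Status struct here is a stand-in for illustration, not minikube's real type.

```go
package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the driver status the CLI renders;
// the real type has more fields.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	// Equivalent of passing --format={{.Host}} on the command line.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", APIServer: "Stopped"})
}
```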
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-768000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-768000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.188427417s)

-- stdout --
	* [kubernetes-upgrade-768000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-768000" primary control-plane node in "kubernetes-upgrade-768000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-768000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-768000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:38:40.353712    5118 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:38:40.353867    5118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:38:40.353870    5118 out.go:358] Setting ErrFile to fd 2...
	I0927 10:38:40.353873    5118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:38:40.354022    5118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:38:40.355238    5118 out.go:352] Setting JSON to false
	I0927 10:38:40.373431    5118 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4084,"bootTime":1727454636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:38:40.373536    5118 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:38:40.377862    5118 out.go:177] * [kubernetes-upgrade-768000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:38:40.381986    5118 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:38:40.382087    5118 notify.go:220] Checking for updates...
	I0927 10:38:40.387858    5118 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:38:40.390931    5118 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:38:40.393916    5118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:38:40.400926    5118 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:38:40.403891    5118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:38:40.407165    5118 config.go:182] Loaded profile config "kubernetes-upgrade-768000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0927 10:38:40.407437    5118 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:38:40.410776    5118 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:38:40.417918    5118 start.go:297] selected driver: qemu2
	I0927 10:38:40.417927    5118 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-768000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-768000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:38:40.417981    5118 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:38:40.420388    5118 cni.go:84] Creating CNI manager for ""
	I0927 10:38:40.420421    5118 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:38:40.420441    5118 start.go:340] cluster config:
	{Name:kubernetes-upgrade-768000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-768000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:38:40.424017    5118 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:38:40.430833    5118 out.go:177] * Starting "kubernetes-upgrade-768000" primary control-plane node in "kubernetes-upgrade-768000" cluster
	I0927 10:38:40.434822    5118 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:38:40.434850    5118 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:38:40.434856    5118 cache.go:56] Caching tarball of preloaded images
	I0927 10:38:40.434934    5118 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:38:40.434940    5118 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:38:40.434998    5118 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/kubernetes-upgrade-768000/config.json ...
	I0927 10:38:40.435405    5118 start.go:360] acquireMachinesLock for kubernetes-upgrade-768000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:38:40.435438    5118 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "kubernetes-upgrade-768000"
	I0927 10:38:40.435447    5118 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:38:40.435453    5118 fix.go:54] fixHost starting: 
	I0927 10:38:40.435562    5118 fix.go:112] recreateIfNeeded on kubernetes-upgrade-768000: state=Stopped err=<nil>
	W0927 10:38:40.435570    5118 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:38:40.439866    5118 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-768000" ...
	I0927 10:38:40.447877    5118 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:38:40.447921    5118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:41:5f:6d:96:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2
	I0927 10:38:40.450070    5118 main.go:141] libmachine: STDOUT: 
	I0927 10:38:40.450116    5118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:38:40.450151    5118 fix.go:56] duration metric: took 14.697792ms for fixHost
	I0927 10:38:40.450156    5118 start.go:83] releasing machines lock for "kubernetes-upgrade-768000", held for 14.713667ms
	W0927 10:38:40.450172    5118 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:38:40.450211    5118 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:38:40.450218    5118 start.go:729] Will try again in 5 seconds ...
	I0927 10:38:45.452243    5118 start.go:360] acquireMachinesLock for kubernetes-upgrade-768000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:38:45.452731    5118 start.go:364] duration metric: took 405.709µs to acquireMachinesLock for "kubernetes-upgrade-768000"
	I0927 10:38:45.452888    5118 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:38:45.452909    5118 fix.go:54] fixHost starting: 
	I0927 10:38:45.453616    5118 fix.go:112] recreateIfNeeded on kubernetes-upgrade-768000: state=Stopped err=<nil>
	W0927 10:38:45.453641    5118 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:38:45.457666    5118 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-768000" ...
	I0927 10:38:45.466155    5118 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:38:45.466350    5118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:41:5f:6d:96:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubernetes-upgrade-768000/disk.qcow2
	I0927 10:38:45.475864    5118 main.go:141] libmachine: STDOUT: 
	I0927 10:38:45.475931    5118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:38:45.476031    5118 fix.go:56] duration metric: took 23.124125ms for fixHost
	I0927 10:38:45.476050    5118 start.go:83] releasing machines lock for "kubernetes-upgrade-768000", held for 23.296209ms
	W0927 10:38:45.476245    5118 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-768000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-768000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:38:45.483031    5118 out.go:201] 
	W0927 10:38:45.487138    5118 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:38:45.487153    5118 out.go:270] * 
	* 
	W0927 10:38:45.489124    5118 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:38:45.498130    5118 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-768000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-768000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-768000 version --output=json: exit status 1 (61.331208ms)

** stderr ** 
	error: context "kubernetes-upgrade-768000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-27 10:38:45.574646 -0700 PDT m=+2626.955828960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-768000 -n kubernetes-upgrade-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-768000 -n kubernetes-upgrade-768000: exit status 7 (33.224583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-768000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-768000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-768000
--- FAIL: TestKubernetesUpgrade (17.26s)
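Note: both restart attempts above failed with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so the qemu2 driver never got a network: no socket_vmnet daemon was listening on the Unix socket that the configured client (/opt/socket_vmnet/bin/socket_vmnet_client) dials. A minimal pre-flight sketch for the build agent, assuming socket_vmnet is installed under /opt/socket_vmnet as the cluster config records; the commented start line is illustrative only, since the exact flags depend on the local install:

	# preflight_socket_vmnet.sh -- hypothetical diagnostic sketch
	SOCK=/var/run/socket_vmnet
	# Is a daemon alive, and does the socket path exist?
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"
	ls -l "$SOCK" 2>/dev/null || echo "socket $SOCK does not exist"
	# socket_vmnet must be started as root before the qemu2 driver runs;
	# illustrative manual start (verify flags against the installed version):
	#   sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 "$SOCK"

If those checks pass, the socket_vmnet_client invocation logged above should connect instead of being refused.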

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.78s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19712
- KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3467343586/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.78s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.21s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19712
- KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3383595701/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.21s)
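Note: both TestHyperkitDriverSkipUpgrade subtests fail identically, and before any upgrade logic runs: hyperkit is an Intel-only hypervisor, so on this darwin/arm64 agent minikube exits with DRV_UNSUPPORTED_OS (status 56) as soon as the driver is selected. A hypothetical job-level guard sketch (the suite could equally skip from Go with t.Skip):

	# Hypothetical CI guard: skip hyperkit-only tests on Apple silicon agents.
	if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ]; then
	  echo "skipping TestHyperkitDriverSkipUpgrade: hyperkit requires darwin/amd64"
	  exit 0
	fi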

TestStoppedBinaryUpgrade/Upgrade (563.31s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2199124660 start -p stopped-upgrade-862000 --memory=2200 --vm-driver=qemu2 
E0927 10:38:53.609529    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:39:12.485512    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2199124660 start -p stopped-upgrade-862000 --memory=2200 --vm-driver=qemu2 : (40.038237291s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2199124660 -p stopped-upgrade-862000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2199124660 -p stopped-upgrade-862000 stop: (3.111377291s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-862000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0927 10:43:53.601541    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:44:12.476366    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-862000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.04802425s)

-- stdout --
	* [stopped-upgrade-862000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-862000" primary control-plane node in "stopped-upgrade-862000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-862000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0927 10:39:30.163370    5160 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:39:30.163518    5160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:39:30.163521    5160 out.go:358] Setting ErrFile to fd 2...
	I0927 10:39:30.163524    5160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:39:30.163681    5160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:39:30.164772    5160 out.go:352] Setting JSON to false
	I0927 10:39:30.184007    5160 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4134,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:39:30.184085    5160 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:39:30.188862    5160 out.go:177] * [stopped-upgrade-862000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:39:30.196780    5160 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:39:30.196817    5160 notify.go:220] Checking for updates...
	I0927 10:39:30.203879    5160 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:39:30.206866    5160 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:39:30.209902    5160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:39:30.212841    5160 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:39:30.215821    5160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:39:30.219068    5160 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:39:30.222865    5160 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0927 10:39:30.225864    5160 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:39:30.229871    5160 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:39:30.236746    5160 start.go:297] selected driver: qemu2
	I0927 10:39:30.236750    5160 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0927 10:39:30.236798    5160 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:39:30.239440    5160 cni.go:84] Creating CNI manager for ""
	I0927 10:39:30.239474    5160 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:39:30.239497    5160 start.go:340] cluster config:
	{Name:stopped-upgrade-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0927 10:39:30.239544    5160 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:39:30.247702    5160 out.go:177] * Starting "stopped-upgrade-862000" primary control-plane node in "stopped-upgrade-862000" cluster
	I0927 10:39:30.251796    5160 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0927 10:39:30.251808    5160 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0927 10:39:30.251812    5160 cache.go:56] Caching tarball of preloaded images
	I0927 10:39:30.251855    5160 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:39:30.251860    5160 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0927 10:39:30.251910    5160 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/config.json ...
	I0927 10:39:30.252385    5160 start.go:360] acquireMachinesLock for stopped-upgrade-862000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:39:30.252411    5160 start.go:364] duration metric: took 20.625µs to acquireMachinesLock for "stopped-upgrade-862000"
	I0927 10:39:30.252418    5160 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:39:30.252422    5160 fix.go:54] fixHost starting: 
	I0927 10:39:30.252517    5160 fix.go:112] recreateIfNeeded on stopped-upgrade-862000: state=Stopped err=<nil>
	W0927 10:39:30.252525    5160 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:39:30.260780    5160 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-862000" ...
	I0927 10:39:30.264838    5160 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:39:30.264908    5160 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50492-:22,hostfwd=tcp::50493-:2376,hostname=stopped-upgrade-862000 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/disk.qcow2
	I0927 10:39:30.311158    5160 main.go:141] libmachine: STDOUT: 
	I0927 10:39:30.311189    5160 main.go:141] libmachine: STDERR: 
	I0927 10:39:30.311194    5160 main.go:141] libmachine: Waiting for VM to start (ssh -p 50492 docker@127.0.0.1)...
	I0927 10:39:49.901445    5160 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/config.json ...
	I0927 10:39:49.902285    5160 machine.go:93] provisionDockerMachine start ...
	I0927 10:39:49.902485    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:49.902894    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:49.902910    5160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 10:39:49.987240    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 10:39:49.987272    5160 buildroot.go:166] provisioning hostname "stopped-upgrade-862000"
	I0927 10:39:49.987414    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:49.987676    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:49.987692    5160 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-862000 && echo "stopped-upgrade-862000" | sudo tee /etc/hostname
	I0927 10:39:50.061999    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-862000
	
	I0927 10:39:50.062074    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:50.062223    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:50.062234    5160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-862000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-862000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-862000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 10:39:50.127255    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 10:39:50.127269    5160 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19712-1508/.minikube CaCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19712-1508/.minikube}
	I0927 10:39:50.127280    5160 buildroot.go:174] setting up certificates
	I0927 10:39:50.127290    5160 provision.go:84] configureAuth start
	I0927 10:39:50.127296    5160 provision.go:143] copyHostCerts
	I0927 10:39:50.127363    5160 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem, removing ...
	I0927 10:39:50.127368    5160 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem
	I0927 10:39:50.127476    5160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/key.pem (1679 bytes)
	I0927 10:39:50.127659    5160 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem, removing ...
	I0927 10:39:50.127662    5160 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem
	I0927 10:39:50.127704    5160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.pem (1078 bytes)
	I0927 10:39:50.127799    5160 exec_runner.go:144] found /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem, removing ...
	I0927 10:39:50.127803    5160 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem
	I0927 10:39:50.127840    5160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19712-1508/.minikube/cert.pem (1123 bytes)
	I0927 10:39:50.127928    5160 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-862000 san=[127.0.0.1 localhost minikube stopped-upgrade-862000]
	I0927 10:39:50.241825    5160 provision.go:177] copyRemoteCerts
	I0927 10:39:50.241873    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 10:39:50.241883    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	I0927 10:39:50.275309    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 10:39:50.281755    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 10:39:50.288440    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0927 10:39:50.295515    5160 provision.go:87] duration metric: took 168.216709ms to configureAuth
	I0927 10:39:50.295524    5160 buildroot.go:189] setting minikube options for container-runtime
	I0927 10:39:50.295618    5160 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:39:50.295659    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:50.295739    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:50.295744    5160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0927 10:39:50.354153    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0927 10:39:50.354162    5160 buildroot.go:70] root file system type: tmpfs
	I0927 10:39:50.354215    5160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0927 10:39:50.354262    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:50.354364    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:50.354398    5160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0927 10:39:50.417860    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0927 10:39:50.417928    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:50.418047    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:50.418057    5160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0927 10:39:50.755109    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0927 10:39:50.755126    5160 machine.go:96] duration metric: took 852.846458ms to provisionDockerMachine
	I0927 10:39:50.755133    5160 start.go:293] postStartSetup for "stopped-upgrade-862000" (driver="qemu2")
	I0927 10:39:50.755140    5160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 10:39:50.755199    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 10:39:50.755208    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	I0927 10:39:50.787501    5160 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 10:39:50.788728    5160 info.go:137] Remote host: Buildroot 2021.02.12
	I0927 10:39:50.788737    5160 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19712-1508/.minikube/addons for local assets ...
	I0927 10:39:50.788810    5160 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19712-1508/.minikube/files for local assets ...
	I0927 10:39:50.788907    5160 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem -> 20392.pem in /etc/ssl/certs
	I0927 10:39:50.789008    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 10:39:50.791941    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem --> /etc/ssl/certs/20392.pem (1708 bytes)
	I0927 10:39:50.799023    5160 start.go:296] duration metric: took 43.886084ms for postStartSetup
	I0927 10:39:50.799036    5160 fix.go:56] duration metric: took 20.547150292s for fixHost
	I0927 10:39:50.799101    5160 main.go:141] libmachine: Using SSH client type: native
	I0927 10:39:50.799207    5160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044bdc00] 0x1044c0440 <nil>  [] 0s} localhost 50492 <nil> <nil>}
	I0927 10:39:50.799212    5160 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 10:39:50.856710    5160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727458790.778619587
	
	I0927 10:39:50.856720    5160 fix.go:216] guest clock: 1727458790.778619587
	I0927 10:39:50.856724    5160 fix.go:229] Guest: 2024-09-27 10:39:50.778619587 -0700 PDT Remote: 2024-09-27 10:39:50.799038 -0700 PDT m=+20.666312543 (delta=-20.418413ms)
	I0927 10:39:50.856738    5160 fix.go:200] guest clock delta is within tolerance: -20.418413ms
	I0927 10:39:50.856751    5160 start.go:83] releasing machines lock for "stopped-upgrade-862000", held for 20.604873625s
	I0927 10:39:50.856829    5160 ssh_runner.go:195] Run: cat /version.json
	I0927 10:39:50.856843    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	I0927 10:39:50.857511    5160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 10:39:50.857533    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	W0927 10:39:50.888335    5160 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0927 10:39:50.888383    5160 ssh_runner.go:195] Run: systemctl --version
	I0927 10:39:50.931088    5160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 10:39:50.933021    5160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 10:39:50.933071    5160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0927 10:39:50.936639    5160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0927 10:39:50.942971    5160 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 10:39:50.942981    5160 start.go:495] detecting cgroup driver to use...
	I0927 10:39:50.943068    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 10:39:50.949728    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0927 10:39:50.952792    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 10:39:50.956106    5160 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 10:39:50.956136    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 10:39:50.959597    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 10:39:50.962645    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 10:39:50.965501    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 10:39:50.968590    5160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 10:39:50.971544    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 10:39:50.974687    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 10:39:50.977533    5160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0927 10:39:50.980747    5160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 10:39:50.983951    5160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 10:39:50.986771    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:51.049952    5160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0927 10:39:51.058253    5160 start.go:495] detecting cgroup driver to use...
	I0927 10:39:51.058333    5160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0927 10:39:51.064646    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 10:39:51.069630    5160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 10:39:51.075444    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 10:39:51.080413    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 10:39:51.085286    5160 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0927 10:39:51.148221    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 10:39:51.153344    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 10:39:51.158567    5160 ssh_runner.go:195] Run: which cri-dockerd
	I0927 10:39:51.159891    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0927 10:39:51.162693    5160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0927 10:39:51.167709    5160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0927 10:39:51.230544    5160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0927 10:39:51.299081    5160 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0927 10:39:51.299152    5160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0927 10:39:51.304294    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:51.365654    5160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 10:39:52.497853    5160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.132199666s)
	I0927 10:39:52.497919    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0927 10:39:52.502811    5160 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0927 10:39:52.512332    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 10:39:52.517348    5160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0927 10:39:52.577459    5160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0927 10:39:52.637991    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:52.697227    5160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0927 10:39:52.703184    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 10:39:52.707413    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:52.777295    5160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0927 10:39:52.816833    5160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0927 10:39:52.816942    5160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0927 10:39:52.819272    5160 start.go:563] Will wait 60s for crictl version
	I0927 10:39:52.819335    5160 ssh_runner.go:195] Run: which crictl
	I0927 10:39:52.820630    5160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 10:39:52.835587    5160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0927 10:39:52.835667    5160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 10:39:52.851954    5160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 10:39:52.872547    5160 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0927 10:39:52.872625    5160 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0927 10:39:52.874105    5160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 10:39:52.877591    5160 kubeadm.go:883] updating cluster {Name:stopped-upgrade-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0927 10:39:52.877635    5160 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0927 10:39:52.877685    5160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 10:39:52.888068    5160 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 10:39:52.888077    5160 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0927 10:39:52.888131    5160 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0927 10:39:52.891619    5160 ssh_runner.go:195] Run: which lz4
	I0927 10:39:52.892841    5160 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 10:39:52.894204    5160 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 10:39:52.894214    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0927 10:39:53.817850    5160 docker.go:649] duration metric: took 925.076875ms to copy over tarball
	I0927 10:39:53.817915    5160 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 10:39:54.968406    5160 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.150503583s)
	I0927 10:39:54.968418    5160 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 10:39:54.984320    5160 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0927 10:39:54.987824    5160 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0927 10:39:54.993074    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:55.053382    5160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 10:39:56.662239    5160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.608877125s)
	I0927 10:39:56.662355    5160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 10:39:56.679242    5160 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 10:39:56.679253    5160 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0927 10:39:56.679258    5160 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 10:39:56.684616    5160 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:39:56.686731    5160 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:39:56.688733    5160 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:39:56.688988    5160 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:39:56.690490    5160 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:39:56.690494    5160 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:39:56.691916    5160 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:39:56.692068    5160 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:39:56.693110    5160 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:39:56.693131    5160 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:39:56.694019    5160 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0927 10:39:56.694258    5160 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:39:56.695337    5160 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:39:56.695659    5160 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:39:56.696690    5160 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0927 10:39:56.697360    5160 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
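
	The interleaved timestamps show the images being retrieved concurrently, roughly one goroutine per reference; the "daemon lookup ... No such image" lines are the expected first miss on a clean runtime and simply route each image through the on-disk cache instead. A sketch of the fan-out (illustrative, not minikube's actual image.go):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "sync"
	    )

	    func main() {
	        images := []string{
	            "registry.k8s.io/pause:3.7",
	            "registry.k8s.io/etcd:3.5.3-0",
	        }
	        var wg sync.WaitGroup
	        for _, img := range images {
	            wg.Add(1)
	            go func(ref string) {
	                defer wg.Done()
	                // Daemon lookup first; "No such image" here is expected and
	                // just sends the load through the on-disk cache.
	                if err := exec.Command("docker", "image", "inspect", ref).Run(); err != nil {
	                    fmt.Printf("daemon lookup for %s failed: %v (will use cache)\n", ref, err)
	                }
	            }(img)
	        }
	        wg.Wait()
	    }
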
	I0927 10:39:57.113954    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:39:57.124490    5160 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0927 10:39:57.124519    5160 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:39:57.124584    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0927 10:39:57.135200    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0927 10:39:57.137557    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:39:57.139523    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:39:57.143981    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:39:57.151839    5160 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0927 10:39:57.151868    5160 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:39:57.151936    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0927 10:39:57.157621    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0927 10:39:57.158396    5160 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0927 10:39:57.158412    5160 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:39:57.158448    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0927 10:39:57.160962    5160 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0927 10:39:57.160980    5160 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:39:57.161038    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0927 10:39:57.169371    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0927 10:39:57.178005    5160 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0927 10:39:57.178026    5160 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0927 10:39:57.178092    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0927 10:39:57.181894    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0927 10:39:57.187009    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0927 10:39:57.187560    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0927 10:39:57.199396    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0927 10:39:57.199405    5160 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0927 10:39:57.199512    5160 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0927 10:39:57.199541    5160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0927 10:39:57.199594    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0927 10:39:57.201066    5160 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0927 10:39:57.201352    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:39:57.222631    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0927 10:39:57.222674    5160 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0927 10:39:57.222688    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0927 10:39:57.222702    5160 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0927 10:39:57.222721    5160 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:39:57.222745    5160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0927 10:39:57.222761    5160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0927 10:39:57.258240    5160 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0927 10:39:57.258269    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0927 10:39:57.261746    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0927 10:39:57.261882    5160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0927 10:39:57.276903    5160 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0927 10:39:57.276931    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
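
	Each "needs transfer" decision compares the image ID reported by the runtime against the ID minikube recorded for its cached copy; a mismatch (or a missing image) removes the runtime copy and stages the cached file for loading. A sketch of the comparison (needsTransfer is an illustrative name; docker prints the ID with a sha256: prefix, stripped here before comparing):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func needsTransfer(ref, wantID string) bool {
	        out, err := exec.Command("docker", "image", "inspect",
	            "--format", "{{.Id}}", ref).Output()
	        if err != nil {
	            return true // image not present in the runtime at all
	        }
	        got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	        return got != wantID
	    }

	    func main() {
	        // Hash taken from the pause:3.7 log line above.
	        if needsTransfer("registry.k8s.io/pause:3.7",
	            "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550") {
	            fmt.Println("needs transfer: docker rmi, then reload from cache")
	        }
	    }
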
	I0927 10:39:57.282496    5160 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0927 10:39:57.282584    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0927 10:39:57.340325    5160 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0927 10:39:57.383348    5160 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0927 10:39:57.383388    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0927 10:39:57.486182    5160 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0927 10:39:57.548920    5160 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0927 10:39:57.548936    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0927 10:39:57.562416    5160 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0927 10:39:57.562544    5160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:39:57.696301    5160 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0927 10:39:57.696328    5160 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0927 10:39:57.696348    5160 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:39:57.696420    5160 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:39:57.710341    5160 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0927 10:39:57.710479    5160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0927 10:39:57.711808    5160 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0927 10:39:57.711822    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0927 10:39:57.744661    5160 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0927 10:39:57.744675    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0927 10:39:57.973868    5160 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0927 10:39:57.973912    5160 cache_images.go:92] duration metric: took 1.294681167s to LoadCachedImages
	W0927 10:39:57.973964    5160 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
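
	LoadCachedImages finished in ~1.29s, but kube-proxy's cached file was missing on the host, hence the warning; the run continues, since the image can still be pulled later. Each successful load above streams the cached tarball into the daemon through a shell pipe, as in this sketch:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // loadImage mirrors the load step in the log: stream the cached tarball
	    // into `docker load` through a shell pipe so the daemon ingests it directly.
	    func loadImage(path string) error {
	        cmd := exec.Command("/bin/bash", "-c",
	            fmt.Sprintf("sudo cat %s | docker load", path))
	        out, err := cmd.CombinedOutput()
	        if err != nil {
	            return fmt.Errorf("docker load %s: %v: %s", path, err, out)
	        }
	        return nil
	    }

	    func main() {
	        // Path from the log above; on another host this would differ.
	        if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
	            fmt.Println(err)
	        }
	    }
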
	I0927 10:39:57.973971    5160 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0927 10:39:57.974028    5160 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-862000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
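
	In the kubelet unit above, the empty ExecStart= line is deliberate: a systemd drop-in must clear any previously defined ExecStart before it may set a replacement for a non-oneshot service. A sketch of assembling such a drop-in locally (flags abridged from the log; minikube actually scp's the bytes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as shown a few lines below):

	    package main

	    import (
	        "fmt"
	        "os"
	    )

	    func main() {
	        // Blank ExecStart= resets the distro's command so the next line can
	        // redefine it; without the reset systemd rejects the second ExecStart.
	        unit := "[Unit]\nWants=docker.socket\n\n[Service]\nExecStart=\n" +
	            "ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet " +
	            "--kubeconfig=/etc/kubernetes/kubelet.conf\n\n[Install]\n"
	        // Local stand-in for the "scp memory" step in the log.
	        fmt.Println(os.WriteFile("10-kubeadm.conf", []byte(unit), 0o644))
	    }
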
	I0927 10:39:57.974109    5160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0927 10:39:57.989692    5160 cni.go:84] Creating CNI manager for ""
	I0927 10:39:57.989712    5160 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:39:57.989721    5160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 10:39:57.989730    5160 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-862000 NodeName:stopped-upgrade-862000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 10:39:57.989798    5160 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-862000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 10:39:57.989871    5160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0927 10:39:57.992709    5160 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 10:39:57.992739    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 10:39:57.995758    5160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0927 10:39:58.000874    5160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 10:39:58.006020    5160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
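
	The "scp memory --> path (N bytes)" lines mean the file contents were rendered in-process and streamed straight to the guest with no temp file on the host; the byte counts (380, 352, 2096) are the rendered sizes of the kubelet drop-in, the kubelet unit, and the kubeadm config above. A local stand-in for the same step:

	    package main

	    import (
	        "fmt"
	        "os"
	    )

	    // scpMemory writes in-memory bytes to a destination path and reports the
	    // same size figure the log prints; minikube streams these over ssh instead.
	    func scpMemory(data []byte, dst string) error {
	        if err := os.WriteFile(dst, data, 0o644); err != nil {
	            return err
	        }
	        fmt.Printf("scp memory --> %s (%d bytes)\n", dst, len(data))
	        return nil
	    }

	    func main() {
	        _ = scpMemory([]byte("rendered kubeadm.yaml here"), "/tmp/kubeadm.yaml.new")
	    }
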
	I0927 10:39:58.011309    5160 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0927 10:39:58.012621    5160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 10:39:58.016397    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:39:58.077626    5160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 10:39:58.087189    5160 certs.go:68] Setting up /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000 for IP: 10.0.2.15
	I0927 10:39:58.087202    5160 certs.go:194] generating shared ca certs ...
	I0927 10:39:58.087212    5160 certs.go:226] acquiring lock for ca certs: {Name:mk0418f7d8f4c252d010b1c431fe702739668245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:39:58.087388    5160 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key
	I0927 10:39:58.087436    5160 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key
	I0927 10:39:58.087441    5160 certs.go:256] generating profile certs ...
	I0927 10:39:58.087543    5160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/client.key
	I0927 10:39:58.087561    5160 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key.382452b8
	I0927 10:39:58.087575    5160 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt.382452b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0927 10:39:58.157681    5160 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt.382452b8 ...
	I0927 10:39:58.157697    5160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt.382452b8: {Name:mk3b014ac82695a7784b900ea0e78c3f91e3ea04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:39:58.158131    5160 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key.382452b8 ...
	I0927 10:39:58.158142    5160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key.382452b8: {Name:mk2b182db26c53a67f044097c0f6ad9062ad4010 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:39:58.158308    5160 certs.go:381] copying /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt.382452b8 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt
	I0927 10:39:58.158461    5160 certs.go:385] copying /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key.382452b8 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key
	I0927 10:39:58.158616    5160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/proxy-client.key
	I0927 10:39:58.158754    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039.pem (1338 bytes)
	W0927 10:39:58.158782    5160 certs.go:480] ignoring /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039_empty.pem, impossibly tiny 0 bytes
	I0927 10:39:58.158787    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 10:39:58.158817    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem (1078 bytes)
	I0927 10:39:58.158842    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem (1123 bytes)
	I0927 10:39:58.158867    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/key.pem (1679 bytes)
	I0927 10:39:58.158918    5160 certs.go:484] found cert: /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem (1708 bytes)
	I0927 10:39:58.159292    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 10:39:58.166365    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 10:39:58.172665    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 10:39:58.180051    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 10:39:58.187645    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 10:39:58.194765    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 10:39:58.201273    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 10:39:58.208236    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 10:39:58.215568    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/2039.pem --> /usr/share/ca-certificates/2039.pem (1338 bytes)
	I0927 10:39:58.222649    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/ssl/certs/20392.pem --> /usr/share/ca-certificates/20392.pem (1708 bytes)
	I0927 10:39:58.229153    5160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 10:39:58.235889    5160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 10:39:58.240889    5160 ssh_runner.go:195] Run: openssl version
	I0927 10:39:58.242708    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 10:39:58.245532    5160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:39:58.246808    5160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:39:58.246831    5160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 10:39:58.248483    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 10:39:58.251805    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2039.pem && ln -fs /usr/share/ca-certificates/2039.pem /etc/ssl/certs/2039.pem"
	I0927 10:39:58.254785    5160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2039.pem
	I0927 10:39:58.256122    5160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:11 /usr/share/ca-certificates/2039.pem
	I0927 10:39:58.256148    5160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2039.pem
	I0927 10:39:58.257912    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2039.pem /etc/ssl/certs/51391683.0"
	I0927 10:39:58.260784    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20392.pem && ln -fs /usr/share/ca-certificates/20392.pem /etc/ssl/certs/20392.pem"
	I0927 10:39:58.264281    5160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20392.pem
	I0927 10:39:58.265612    5160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:11 /usr/share/ca-certificates/20392.pem
	I0927 10:39:58.265639    5160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20392.pem
	I0927 10:39:58.267287    5160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20392.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 10:39:58.270086    5160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 10:39:58.271437    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 10:39:58.273342    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 10:39:58.275215    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 10:39:58.277239    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 10:39:58.279046    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 10:39:58.280833    5160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
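
	The openssl probes above implement minikube's cert health check: -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, and the subject-hash symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) are how OpenSSL locates CA files in /etc/ssl/certs. A sketch of the expiry probe (certExpiringWithin is an illustrative name):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // certExpiringWithin mirrors the -checkend probes above: openssl exits
	    // non-zero when the certificate will expire within the given seconds.
	    func certExpiringWithin(path string, seconds int) bool {
	        return exec.Command("openssl", "x509", "-noout",
	            "-in", path, "-checkend", fmt.Sprint(seconds)).Run() != nil
	    }

	    func main() {
	        if certExpiringWithin("/var/lib/minikube/certs/apiserver.crt", 86400) {
	            fmt.Println("apiserver cert expires within 24h; would regenerate")
	        }
	    }
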
	I0927 10:39:58.282588    5160 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50526 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0927 10:39:58.282664    5160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 10:39:58.292481    5160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 10:39:58.295951    5160 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 10:39:58.295963    5160 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 10:39:58.295995    5160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 10:39:58.301768    5160 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 10:39:58.302080    5160 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-862000" does not appear in /Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:39:58.302185    5160 kubeconfig.go:62] /Users/jenkins/minikube-integration/19712-1508/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-862000" cluster setting kubeconfig missing "stopped-upgrade-862000" context setting]
	I0927 10:39:58.302387    5160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/kubeconfig: {Name:mk8300c379932403020f33b54e2599e68fb2c757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:39:58.302841    5160 kapi.go:59] client config for stopped-upgrade-862000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/client.key", CAFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a965d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 10:39:58.303170    5160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 10:39:58.305902    5160 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-862000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
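
	The drift is exactly what a restarted older profile should show under a newer minikube: the CRI socket gained its unix:// scheme and the kubelet moved from the systemd to the cgroupfs cgroup driver (plus two new kubelet fields), so the rendered kubeadm.yaml.new replaces the old file and the init phases are rerun. Drift detection itself is just a diff exit code, as in this sketch:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // configDrifted mirrors the check above: `diff -u` exits non-zero when the
	    // rendered config differs from the one the cluster was built with.
	    func configDrifted(oldPath, newPath string) (bool, string) {
	        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	        return err != nil, string(out)
	    }

	    func main() {
	        drifted, patch := configDrifted(
	            "/var/tmp/minikube/kubeadm.yaml",
	            "/var/tmp/minikube/kubeadm.yaml.new")
	        if drifted {
	            fmt.Println("detected kubeadm config drift:\n" + patch)
	        }
	    }
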
	I0927 10:39:58.305907    5160 kubeadm.go:1160] stopping kube-system containers ...
	I0927 10:39:58.305954    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 10:39:58.316356    5160 docker.go:483] Stopping containers: [da497851937b 120cb3756aba 9e8db25c44dd 35682614f5ee f305d112a88e d3e7db455b14 726712748f0b e6b2ac509287]
	I0927 10:39:58.316429    5160 ssh_runner.go:195] Run: docker stop da497851937b 120cb3756aba 9e8db25c44dd 35682614f5ee f305d112a88e d3e7db455b14 726712748f0b e6b2ac509287
	I0927 10:39:58.326937    5160 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 10:39:58.332663    5160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 10:39:58.335821    5160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 10:39:58.335826    5160 kubeadm.go:157] found existing configuration files:
	
	I0927 10:39:58.335848    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf
	I0927 10:39:58.338752    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 10:39:58.338777    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 10:39:58.341265    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf
	I0927 10:39:58.344037    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 10:39:58.344060    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 10:39:58.347082    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf
	I0927 10:39:58.349583    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 10:39:58.349608    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 10:39:58.352515    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf
	I0927 10:39:58.355680    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 10:39:58.355706    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
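
	This cleanup loop greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that lacks it (here all four are simply absent, hence exit status 2), so kubeadm regenerates them from scratch. A sketch of the loop:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        endpoint := "https://control-plane.minikube.internal:50526"
	        files := []string{"admin.conf", "kubelet.conf",
	            "controller-manager.conf", "scheduler.conf"}
	        for _, f := range files {
	            path := "/etc/kubernetes/" + f
	            // grep exits non-zero if the endpoint is absent or the file is
	            // missing; either way the stale file is removed for regeneration.
	            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
	                fmt.Println("removing", path)
	                _ = exec.Command("sudo", "rm", "-f", path).Run()
	            }
	        }
	    }
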
	I0927 10:39:58.358465    5160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 10:39:58.360985    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:39:58.382306    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:39:58.654989    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:39:58.767542    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 10:39:58.793992    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
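
	Rather than a full kubeadm init, the restart path replays the individual init phases against the freshly written config: certs, kubeconfigs, kubelet start, static control-plane manifests, then local etcd. A sketch of that sequence (paths from the log; PATH handling and error detail abridged):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // The phases the log runs, in order; each consumes the same config.
	        phases := [][]string{
	            {"certs", "all"},
	            {"kubeconfig", "all"},
	            {"kubelet-start"},
	            {"control-plane", "all"},
	            {"etcd", "local"},
	        }
	        for _, p := range phases {
	            args := append([]string{"kubeadm", "init", "phase"}, p...)
	            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
	                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
	                return
	            }
	        }
	    }
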
	I0927 10:39:58.819520    5160 api_server.go:52] waiting for apiserver process to appear ...
	I0927 10:39:58.819612    5160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:39:59.321656    5160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:39:59.821487    5160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:39:59.826459    5160 api_server.go:72] duration metric: took 1.0069655s to wait for apiserver process to appear ...
	I0927 10:39:59.826469    5160 api_server.go:88] waiting for apiserver healthz status ...
	I0927 10:39:59.826487    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:04.828309    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:04.828372    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:09.828483    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:09.828530    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:14.828839    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:14.828886    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:19.829248    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:19.829294    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:24.829849    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:24.829946    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:29.830961    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:29.831009    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:34.832497    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:34.832548    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:39.834164    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:39.834210    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:44.836124    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:44.836141    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:49.838168    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:49.838190    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:54.840448    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:40:54.840572    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:40:59.843237    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
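
	Each healthz probe above times out after about five seconds (hence the regular ~5 s spacing of the "stopped" lines) and the loop keeps retrying until an outer deadline, at which point minikube falls back to gathering component logs, as seen next. A sketch of the probe loop (the real client trusts the cluster CA from the rest.Config shown earlier; InsecureSkipVerify here only keeps the sketch self-contained):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the ~5s gaps between probes
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(1 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err == nil {
	                resp.Body.Close()
	                fmt.Println("healthz:", resp.Status)
	                return
	            }
	            fmt.Println("stopped:", err) // Client.Timeout exceeded, as above
	        }
	        fmt.Println("apiserver never became healthy; gathering logs instead")
	    }
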
	I0927 10:40:59.843733    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:40:59.890393    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:40:59.890540    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:40:59.908375    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:40:59.908484    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:40:59.921846    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:40:59.921926    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:40:59.933520    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:40:59.933606    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:40:59.944494    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:40:59.944576    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:40:59.955568    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:40:59.955661    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:40:59.966375    5160 logs.go:276] 0 containers: []
	W0927 10:40:59.966386    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:40:59.966461    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:40:59.976413    5160 logs.go:276] 0 containers: []
	W0927 10:40:59.976426    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:40:59.976437    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:40:59.976443    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:00.014099    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:00.014111    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:00.027602    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:00.027611    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:00.054127    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:00.054137    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:00.069459    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:00.069471    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:00.080930    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:00.080943    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:00.095173    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:00.095182    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:00.106440    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:00.106450    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:00.118883    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:00.118893    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:00.131983    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:00.131997    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:00.209852    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:00.209866    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:00.223557    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:00.223570    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:00.238688    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:00.238696    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:00.243496    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:00.243505    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:00.260780    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:00.260791    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
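
	The gathering pass enumerates each control-plane component's containers, current and exited, via the k8s_<component> name filter, then tails 400 lines from every ID; two IDs per component (e.g. [021632da64ae 9e8db25c44dd] for kube-apiserver) are the post-restart and pre-restart instances. A sketch (containerIDs is an illustrative name):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs lists current and exited containers whose names match the
	    // k8s_<component> pattern the kubelet assigns.
	    func containerIDs(component string) []string {
	        out, _ := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        return strings.Fields(string(out))
	    }

	    func main() {
	        for _, id := range containerIDs("kube-apiserver") {
	            logs, _ := exec.Command("/bin/bash", "-c",
	                "docker logs --tail 400 "+id).CombinedOutput()
	            fmt.Printf("=== %s (%d bytes of logs)\n", id, len(logs))
	        }
	    }
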
	I0927 10:41:02.786289    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:07.788501    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:07.788740    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:07.806155    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:07.806265    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:07.819542    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:07.819631    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:07.830428    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:07.830513    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:07.842110    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:07.842188    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:07.852689    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:07.852768    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:07.864019    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:07.864091    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:07.874039    5160 logs.go:276] 0 containers: []
	W0927 10:41:07.874051    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:07.874120    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:07.884563    5160 logs.go:276] 0 containers: []
	W0927 10:41:07.884576    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:07.884585    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:07.884591    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:07.898875    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:07.898886    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:07.912712    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:07.912721    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:07.926946    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:07.926959    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:07.939299    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:07.939308    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:07.964703    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:07.964713    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:08.003463    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:08.003473    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:08.037548    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:08.037559    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:08.052312    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:08.052320    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:08.064241    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:08.064253    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:08.075789    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:08.075799    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:08.089782    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:08.089790    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:08.128288    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:08.128298    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:08.140343    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:08.140360    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:08.159358    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:08.159373    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:10.665989    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:15.667857    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:15.668124    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:15.684845    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:15.684957    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:15.698406    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:15.698500    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:15.709739    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:15.709816    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:15.720668    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:15.720752    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:15.734391    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:15.734479    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:15.745216    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:15.745299    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:15.755499    5160 logs.go:276] 0 containers: []
	W0927 10:41:15.755511    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:15.755584    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:15.766055    5160 logs.go:276] 0 containers: []
	W0927 10:41:15.766065    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:15.766072    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:15.766079    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:15.804976    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:15.804988    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:15.809697    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:15.809706    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:15.823429    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:15.823441    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:15.837785    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:15.837801    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:15.852778    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:15.852787    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:15.869945    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:15.869956    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:15.881392    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:15.881404    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:15.919585    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:15.919596    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:15.944812    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:15.944823    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:15.956790    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:15.956800    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:15.980870    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:15.980878    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:15.995368    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:15.995376    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:16.006976    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:16.006987    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:16.021902    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:16.021913    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
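
	[Annotation] The fixed five-second gap between each "Checking apiserver healthz" line and its matching "stopped: ... Client.Timeout exceeded while awaiting headers" line points to an HTTP client with a hard request timeout. Below is a minimal sketch of such a probe, not minikube's actual api_server.go; the probeHealthz name is hypothetical, and the 5s timeout and InsecureSkipVerify choice are assumptions read off the observed behavior (self-signed guest cert, 5s cadence).

	// healthz_probe.go - hedged reconstruction of the probe loop seen above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func probeHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed from the 5s gap between "Checking" and "stopped" lines
			Transport: &http.Transport{
				// the guest apiserver serves a cert the host does not trust, so skip verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			// this branch produces the "stopped: ... Client.Timeout exceeded" shape above
			return fmt.Errorf("stopped: %s: %w", url, err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		for i := 0; i < 3; i++ {
			if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
				fmt.Println(err) // on failure, the log above falls back to gathering diagnostics
				continue
			}
			fmt.Println("apiserver healthy")
			return
		}
	}
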
	I0927 10:41:18.536605    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:23.538761    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:23.538969    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:23.552497    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:23.552595    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:23.563800    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:23.563885    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:23.574028    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:23.574116    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:23.584307    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:23.584385    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:23.594925    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:23.595007    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:23.605319    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:23.605394    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:23.615808    5160 logs.go:276] 0 containers: []
	W0927 10:41:23.615819    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:23.615891    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:23.626221    5160 logs.go:276] 0 containers: []
	W0927 10:41:23.626233    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:23.626242    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:23.626248    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:23.639881    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:23.639890    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:23.651499    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:23.651508    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:23.663524    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:23.663535    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:23.700543    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:23.700558    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:23.714962    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:23.714971    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:23.728674    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:23.728683    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:23.752542    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:23.752553    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:23.756887    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:23.756894    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:23.768399    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:23.768414    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:23.786513    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:23.786527    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:23.800045    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:23.800058    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:23.817104    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:23.817113    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:23.851954    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:23.851967    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:23.877336    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:23.877347    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:26.393919    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:31.395321    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:31.395513    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:31.406830    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:31.406904    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:31.417502    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:31.417574    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:31.427950    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:31.428033    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:31.438829    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:31.438916    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:31.449943    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:31.450034    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:31.466910    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:31.466987    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:31.478724    5160 logs.go:276] 0 containers: []
	W0927 10:41:31.478737    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:31.478806    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:31.493611    5160 logs.go:276] 0 containers: []
	W0927 10:41:31.493622    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:31.493630    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:31.493636    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:31.497906    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:31.497914    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:31.509522    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:31.509532    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:31.534659    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:31.534667    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:31.546811    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:31.546820    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:31.585909    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:31.585917    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:31.610111    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:31.610123    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:31.624304    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:31.624315    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:31.639205    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:31.639218    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:31.673015    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:31.673031    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:31.688266    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:31.688276    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:31.701707    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:31.701717    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:31.713353    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:31.713363    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:31.731047    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:31.731057    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:31.743757    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:31.743768    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
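
	[Annotation] Each retry cycle above first enumerates control-plane containers one component at a time with docker ps name filters (logs.go:276), warning when nothing matches (logs.go:278, as for "kindnet" and "storage-provisioner"). A hedged reconstruction of that lookup follows; listK8sContainers is a hypothetical name, while the docker arguments are copied verbatim from the Run: lines.

	// find_containers.go - sketch of the per-component container discovery.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listK8sContainers(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := listK8sContainers(c)
			if err != nil {
				fmt.Println("docker ps failed:", err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
	}
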
	I0927 10:41:34.258938    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:39.261190    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:39.261477    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:39.282367    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:39.282487    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:39.296815    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:39.296904    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:39.308764    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:39.308848    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:39.319423    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:39.319514    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:39.330042    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:39.330121    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:39.340600    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:39.340682    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:39.350689    5160 logs.go:276] 0 containers: []
	W0927 10:41:39.350701    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:39.350771    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:39.362260    5160 logs.go:276] 0 containers: []
	W0927 10:41:39.362272    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:39.362279    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:39.362285    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:39.366950    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:39.366958    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:39.390438    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:39.390454    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:39.408374    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:39.408387    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:39.422770    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:39.422785    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:39.447426    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:39.447438    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:39.465535    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:39.465550    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:39.481188    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:39.481200    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:39.492930    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:39.492942    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:39.511023    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:39.511034    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:39.550232    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:39.550246    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:39.568009    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:39.568023    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:39.585965    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:39.585978    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:39.598104    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:39.598117    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:39.632822    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:39.632835    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
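
	[Annotation] The "Gathering logs for ..." steps fan out over three source kinds: systemd units via journalctl (kubelet; docker plus cri-docker), the kernel ring buffer via dmesg, and per-container output via docker logs --tail 400. A small illustrative driver under those assumptions is below; the gather helper and its output handling are invented, but the command strings are verbatim from the log.

	// gather_logs.go - sketch of the multi-source log collection above (logs.go:123).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func gather(name, cmd string) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
		}
		_ = out // a real implementation would buffer this into the report
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("kube-apiserver [021632da64ae]", "docker logs --tail 400 021632da64ae")
	}
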
	I0927 10:41:42.149671    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:47.152237    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:47.152688    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:47.191355    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:47.191549    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:47.211302    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:47.211419    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:47.229117    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:47.229209    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:47.241154    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:47.241239    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:47.251964    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:47.252044    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:47.262907    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:47.262995    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:47.274218    5160 logs.go:276] 0 containers: []
	W0927 10:41:47.274228    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:47.274298    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:47.285292    5160 logs.go:276] 0 containers: []
	W0927 10:41:47.285304    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:47.285312    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:47.285317    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:47.297070    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:47.297080    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:47.314625    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:47.314634    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:47.335489    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:47.335499    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:47.375364    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:47.375371    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:47.400545    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:47.400557    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:47.415826    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:47.415842    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:47.430111    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:47.430120    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:47.448723    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:47.448736    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:47.452718    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:47.452724    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:47.475456    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:47.475469    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:41:47.486946    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:47.486957    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:47.501379    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:47.501390    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:47.528242    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:47.528252    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:47.584693    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:47.584704    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:50.098974    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:41:55.101631    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:41:55.102051    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:41:55.132680    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:41:55.132836    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:41:55.151304    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:41:55.151407    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:41:55.166505    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:41:55.166592    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:41:55.178382    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:41:55.178464    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:41:55.188967    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:41:55.189034    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:41:55.199609    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:41:55.199676    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:41:55.209957    5160 logs.go:276] 0 containers: []
	W0927 10:41:55.209968    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:41:55.210025    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:41:55.220180    5160 logs.go:276] 0 containers: []
	W0927 10:41:55.220191    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:41:55.220198    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:41:55.220204    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:41:55.259177    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:41:55.259189    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:41:55.283559    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:41:55.283567    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:41:55.287649    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:41:55.287655    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:41:55.322177    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:41:55.322188    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:41:55.336505    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:41:55.336515    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:41:55.351788    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:41:55.351797    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:41:55.366205    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:41:55.366215    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:41:55.380103    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:41:55.380112    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:41:55.394029    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:41:55.394040    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:41:55.405366    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:41:55.405376    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:41:55.422330    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:41:55.422340    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:41:55.446905    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:41:55.446915    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:41:55.459137    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:41:55.459147    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:41:55.472995    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:41:55.473004    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
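
	[Annotation] The "container status" command above relies on a shell fallback: the command substitution `which crictl || echo crictl` expands to crictl's full path when it is installed and to the bare name otherwise, so on a Docker-only node the first pipeline leg fails cleanly and "|| sudo docker ps -a" takes over. The sketch below runs the same one-liner; executing it locally is an assumption, since minikube actually runs it over SSH via ssh_runner.

	// container_status.go - the crictl-or-docker fallback, run locally for illustration.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// verbatim from the log: prefer crictl if present, else fall back to the Docker CLI
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}
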
	I0927 10:41:57.988316    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:02.990857    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:02.991096    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:03.007709    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:03.007812    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:03.021191    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:03.021268    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:03.031970    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:03.032058    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:03.042507    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:03.042590    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:03.053167    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:03.053263    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:03.063875    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:03.063957    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:03.092748    5160 logs.go:276] 0 containers: []
	W0927 10:42:03.092762    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:03.092832    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:03.110496    5160 logs.go:276] 0 containers: []
	W0927 10:42:03.110508    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:03.110518    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:03.110524    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:03.148951    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:03.148962    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:03.163151    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:03.163164    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:03.177998    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:03.178012    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:03.189823    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:03.189836    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:03.206450    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:03.206463    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:03.218390    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:03.218403    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:03.223077    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:03.223083    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:03.238523    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:03.238536    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:03.250394    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:03.250405    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:03.275217    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:03.275232    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:03.319125    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:03.319135    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:03.334226    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:03.334236    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:03.358796    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:03.358806    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:03.370349    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:03.370364    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:05.893059    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:10.895266    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:10.895449    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:10.908899    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:10.908995    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:10.920194    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:10.920280    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:10.930301    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:10.930396    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:10.941091    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:10.941174    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:10.951400    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:10.951479    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:10.962734    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:10.962808    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:10.973058    5160 logs.go:276] 0 containers: []
	W0927 10:42:10.973069    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:10.973141    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:10.984591    5160 logs.go:276] 0 containers: []
	W0927 10:42:10.984603    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:10.984611    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:10.984617    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:10.998826    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:10.998836    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:11.022134    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:11.022143    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:11.026276    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:11.026285    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:11.040137    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:11.040147    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:11.053837    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:11.053847    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:11.068632    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:11.068642    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:11.086345    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:11.086355    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:11.098645    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:11.098656    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:11.139015    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:11.139026    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:11.153050    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:11.153060    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:11.171787    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:11.171799    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:11.183427    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:11.183437    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:11.196499    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:11.196508    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:11.230860    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:11.230872    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:13.758496    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:18.760989    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:18.761133    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:18.773627    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:18.773718    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:18.784264    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:18.784350    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:18.801936    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:18.802022    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:18.812841    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:18.812930    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:18.823474    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:18.823557    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:18.834053    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:18.834138    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:18.844080    5160 logs.go:276] 0 containers: []
	W0927 10:42:18.844094    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:18.844162    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:18.856617    5160 logs.go:276] 0 containers: []
	W0927 10:42:18.856633    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:18.856642    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:18.856649    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:18.868905    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:18.868915    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:18.886403    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:18.886413    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:18.898341    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:18.898357    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:18.935239    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:18.935246    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:18.952917    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:18.952926    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:18.969518    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:18.969530    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:18.981454    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:18.981464    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:18.985391    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:18.985397    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:18.999430    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:18.999441    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:19.011719    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:19.011730    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:19.032602    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:19.032612    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:19.057834    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:19.057846    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:19.072968    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:19.072978    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:19.095642    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:19.095651    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
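
	[Annotation] The "describe nodes" step deliberately invokes the version-pinned binary /var/lib/minikube/binaries/v1.24.1/kubectl with an explicit --kubeconfig, rather than whatever kubectl is on PATH, so client/server version skew cannot distort the output. A hedged sketch of that invocation follows; local exec is assumed here, while the real run goes through ssh_runner inside the guest.

	// describe_nodes.go - sketch of the pinned-kubectl "describe nodes" call.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.24.1/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
		if err != nil {
			fmt.Println("describe nodes failed:", err)
		}
		fmt.Print(string(out))
	}
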
	I0927 10:42:21.632934    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:26.635226    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:26.635456    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:26.653016    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:26.653118    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:26.666126    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:26.666219    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:26.677798    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:26.677881    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:26.688167    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:26.688258    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:26.698531    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:26.698615    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:26.709176    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:26.709258    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:26.719237    5160 logs.go:276] 0 containers: []
	W0927 10:42:26.719249    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:26.719317    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:26.731288    5160 logs.go:276] 0 containers: []
	W0927 10:42:26.731302    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:26.731311    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:26.731316    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:26.745804    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:26.745815    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:26.760585    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:26.760596    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:26.775661    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:26.775672    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:26.814996    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:26.815004    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:26.849483    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:26.849494    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:26.861337    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:26.861351    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:26.865367    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:26.865375    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:26.879151    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:26.879162    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:26.896104    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:26.896115    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:26.920258    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:26.920265    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:26.944040    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:26.944051    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:26.961561    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:26.961570    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:26.974435    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:26.974445    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:26.987069    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:26.987079    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:29.501312    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:34.503360    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:34.503654    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:34.529504    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:34.529642    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:34.545734    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:34.545832    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:34.558764    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:34.558852    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:34.570313    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:34.570396    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:34.580862    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:34.580939    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:34.591559    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:34.591633    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:34.601890    5160 logs.go:276] 0 containers: []
	W0927 10:42:34.601902    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:34.601966    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:34.612320    5160 logs.go:276] 0 containers: []
	W0927 10:42:34.612333    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:34.612341    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:34.612346    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:34.635183    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:34.635190    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:34.671445    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:34.671456    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:34.685648    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:34.685658    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:34.703679    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:34.703689    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:34.717870    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:34.717881    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:34.736349    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:34.736360    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:34.775508    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:34.775519    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:34.780493    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:34.780505    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:34.793181    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:34.793192    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:34.808132    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:34.808144    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:34.823190    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:34.823206    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:34.834891    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:34.834904    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:34.846692    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:34.846707    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:34.872420    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:34.872430    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:37.388117    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:42.390733    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:42.391032    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:42.416125    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:42.416265    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:42.433192    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:42.433299    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:42.446836    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:42.446925    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:42.458255    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:42.458333    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:42.470601    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:42.470685    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:42.481622    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:42.481702    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:42.496605    5160 logs.go:276] 0 containers: []
	W0927 10:42:42.496617    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:42.496692    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:42.507301    5160 logs.go:276] 0 containers: []
	W0927 10:42:42.507312    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:42.507319    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:42.507324    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:42.520954    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:42.520965    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:42.535272    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:42.535283    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:42.547754    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:42.547764    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:42.562579    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:42.562595    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:42.580944    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:42.580955    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:42.592158    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:42.592174    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:42.617720    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:42.617731    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:42.631570    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:42.631584    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:42.645016    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:42.645026    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:42.657545    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:42.657555    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:42.682090    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:42.682100    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:42.720247    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:42.720255    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:42.724332    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:42.724340    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:42.759285    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:42.759296    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
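The block above is one iteration of minikube's control-plane wait loop: a GET against https://10.0.2.15:8443/healthz with a roughly 5-second client timeout, followed on failure by a diagnostic sweep that lists each k8s_* container and tails the last 400 lines of its logs, plus the kubelet and docker journals, dmesg, and kubectl describe nodes. The same iteration repeats below every few seconds until the restart deadline expires. A rough bash equivalent of one iteration, runnable inside the guest (endpoint and container-name prefixes taken from the log lines above; a sketch, not minikube's actual code):

    # Probe the apiserver; on failure, gather the same diagnostics the log shows.
    probe() { curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; }
    until probe; do
      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
        for id in $(docker ps -a --filter=name="k8s_${name}" --format '{{.ID}}'); do
          docker logs --tail 400 "$id"
        done
      done
      sudo journalctl -u kubelet -n 400
      sudo journalctl -u docker -u cri-docker -n 400
      sleep 2
    done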
	I0927 10:42:45.278243    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:50.278938    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:50.279315    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:50.309211    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:50.309371    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:50.328059    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:50.328154    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:50.341806    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:50.341898    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:50.357210    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:50.357310    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:50.367589    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:50.367666    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:50.378376    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:50.378464    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:50.388393    5160 logs.go:276] 0 containers: []
	W0927 10:42:50.388405    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:50.388477    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:50.398883    5160 logs.go:276] 0 containers: []
	W0927 10:42:50.398899    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:50.398906    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:50.398912    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:50.403598    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:50.403604    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:50.415186    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:50.415196    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:42:50.427951    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:50.427962    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:50.442223    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:50.442234    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:50.456901    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:50.456911    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:50.468086    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:50.468096    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:50.485171    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:50.485182    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:50.510315    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:50.510326    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:50.524668    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:50.524678    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:50.542655    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:50.542666    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:50.565329    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:50.565336    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:50.602554    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:50.602564    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:50.644210    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:50.644223    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:50.659173    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:50.659183    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:53.174171    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:42:58.176287    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:42:58.176486    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:42:58.190555    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:42:58.190639    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:42:58.202154    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:42:58.202232    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:42:58.213117    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:42:58.213206    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:42:58.226821    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:42:58.226904    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:42:58.241396    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:42:58.241474    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:42:58.254718    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:42:58.254806    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:42:58.268867    5160 logs.go:276] 0 containers: []
	W0927 10:42:58.268881    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:42:58.268952    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:42:58.279278    5160 logs.go:276] 0 containers: []
	W0927 10:42:58.279290    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:42:58.279299    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:42:58.279305    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:42:58.293774    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:42:58.293787    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:42:58.305515    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:42:58.305525    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:42:58.310100    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:42:58.310109    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:42:58.351900    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:42:58.351912    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:42:58.366570    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:42:58.366583    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:42:58.383952    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:42:58.383962    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:42:58.399599    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:42:58.399614    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:42:58.424384    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:42:58.424394    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:42:58.440300    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:42:58.440315    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:42:58.463241    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:42:58.463249    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:42:58.475281    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:42:58.475293    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:42:58.512763    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:42:58.512772    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:42:58.527596    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:42:58.527609    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:42:58.538941    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:42:58.538951    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:01.053188    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:06.055457    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:06.055589    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:06.069521    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:06.069616    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:06.080690    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:06.080773    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:06.091179    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:06.091263    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:06.102172    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:06.102248    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:06.112692    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:06.112780    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:06.123662    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:06.123744    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:06.133867    5160 logs.go:276] 0 containers: []
	W0927 10:43:06.133879    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:06.133947    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:06.143733    5160 logs.go:276] 0 containers: []
	W0927 10:43:06.143742    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:06.143750    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:06.143755    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:06.157686    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:06.157696    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:06.169865    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:06.169876    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:06.189796    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:06.189806    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:06.213963    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:06.213970    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:06.252418    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:06.252426    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:06.287354    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:06.287365    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:06.302624    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:06.302634    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:06.317139    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:06.317150    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:06.329218    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:06.329227    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:06.333645    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:06.333653    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:06.347740    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:06.347750    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:06.373669    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:06.373678    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:06.385036    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:06.385046    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:06.399578    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:06.399589    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:08.912988    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:13.915098    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:13.915303    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:13.929694    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:13.929793    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:13.942027    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:13.942110    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:13.952340    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:13.952426    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:13.962750    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:13.962835    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:13.973386    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:13.973458    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:13.983968    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:13.984035    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:13.993854    5160 logs.go:276] 0 containers: []
	W0927 10:43:13.993863    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:13.993921    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:14.003754    5160 logs.go:276] 0 containers: []
	W0927 10:43:14.003765    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:14.003773    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:14.003779    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:14.042920    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:14.042928    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:14.057505    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:14.057515    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:14.070983    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:14.070994    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:14.088602    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:14.088611    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:14.106099    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:14.106111    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:14.119914    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:14.119924    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:14.138731    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:14.138741    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:14.155312    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:14.155322    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:14.168262    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:14.168278    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:14.172302    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:14.172310    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:14.206745    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:14.206760    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:14.231535    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:14.231545    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:14.243854    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:14.243864    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:14.265900    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:14.265907    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:16.779614    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:21.780616    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:21.780865    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:21.804640    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:21.804765    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:21.820420    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:21.820518    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:21.833014    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:21.833098    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:21.844095    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:21.844179    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:21.854858    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:21.854942    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:21.865950    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:21.866027    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:21.876659    5160 logs.go:276] 0 containers: []
	W0927 10:43:21.876672    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:21.876734    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:21.886441    5160 logs.go:276] 0 containers: []
	W0927 10:43:21.886454    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:21.886461    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:21.886467    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:21.908956    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:21.908963    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:21.913045    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:21.913052    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:21.938078    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:21.938088    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:21.954819    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:21.954830    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:21.971930    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:21.971940    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:22.007690    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:22.007699    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:22.018963    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:22.018973    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:22.030651    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:22.030661    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:22.069848    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:22.069858    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:22.087082    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:22.087095    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:22.099698    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:22.099708    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:22.117149    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:22.117167    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:22.145215    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:22.145232    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:22.168050    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:22.168061    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:24.681509    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:29.683722    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:29.683920    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:29.700017    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:29.700123    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:29.713254    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:29.713330    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:29.724794    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:29.724866    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:29.735238    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:29.735324    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:29.746186    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:29.746270    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:29.757502    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:29.757580    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:29.768013    5160 logs.go:276] 0 containers: []
	W0927 10:43:29.768023    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:29.768087    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:29.777770    5160 logs.go:276] 0 containers: []
	W0927 10:43:29.777779    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:29.777786    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:29.777791    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:29.793068    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:29.793083    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:29.807532    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:29.807542    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:29.822156    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:29.822165    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:29.845016    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:29.845026    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:29.849592    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:29.849602    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:29.884748    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:29.884758    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:29.909431    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:29.909446    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:29.923480    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:29.923491    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:29.934891    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:29.934905    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:29.949311    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:29.949325    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:29.961132    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:29.961145    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:29.979647    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:29.979656    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:30.017338    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:30.017349    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:30.030486    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:30.030499    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:32.542057    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:37.544417    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:37.544719    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:37.569675    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:37.569819    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:37.586911    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:37.587011    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:37.599709    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:37.599790    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:37.611402    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:37.611488    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:37.625355    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:37.625435    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:37.635900    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:37.635976    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:37.649214    5160 logs.go:276] 0 containers: []
	W0927 10:43:37.649228    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:37.649299    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:37.658946    5160 logs.go:276] 0 containers: []
	W0927 10:43:37.658956    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:37.658965    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:37.658972    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:37.677032    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:37.677046    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:37.690629    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:37.690642    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:37.708784    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:37.708796    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:37.732681    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:37.732687    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:37.746838    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:37.746853    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:37.761679    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:37.761689    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:37.774137    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:37.774148    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:37.785975    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:37.785986    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:37.790551    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:37.790560    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:37.814949    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:37.814961    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:37.829207    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:37.829215    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:37.868495    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:37.868506    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:37.902858    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:37.902869    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:37.914379    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:37.914390    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:40.427813    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:45.429954    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:45.430149    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:45.442613    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:45.442699    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:45.453335    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:45.453422    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:45.464015    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:45.464102    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:45.475581    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:45.475659    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:45.486190    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:45.486275    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:45.496990    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:45.497068    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:45.507214    5160 logs.go:276] 0 containers: []
	W0927 10:43:45.507226    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:45.507294    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:45.517821    5160 logs.go:276] 0 containers: []
	W0927 10:43:45.517832    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:45.517841    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:45.517846    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:45.522344    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:45.522372    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:45.555543    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:45.555554    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:45.570027    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:45.570039    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:45.593782    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:45.593792    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:45.605856    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:45.605866    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:45.644609    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:45.644620    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:45.658823    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:45.658835    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:45.670871    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:45.670882    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:45.684958    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:45.684971    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:45.699098    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:45.699110    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:45.710558    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:45.710568    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:45.725172    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:45.725183    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:45.748771    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:45.748782    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:45.771571    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:45.771584    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:48.285650    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:43:53.288288    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:43:53.288823    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:43:53.330715    5160 logs.go:276] 2 containers: [021632da64ae 9e8db25c44dd]
	I0927 10:43:53.330861    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:43:53.353013    5160 logs.go:276] 2 containers: [252e9947f2ea f305d112a88e]
	I0927 10:43:53.353108    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:43:53.367763    5160 logs.go:276] 1 containers: [348a1d50ee96]
	I0927 10:43:53.367853    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:43:53.380041    5160 logs.go:276] 2 containers: [9a8675225acd 35682614f5ee]
	I0927 10:43:53.380130    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:43:53.390825    5160 logs.go:276] 1 containers: [4e635867a2e5]
	I0927 10:43:53.390901    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:43:53.401349    5160 logs.go:276] 2 containers: [6fdfe084ab6a da497851937b]
	I0927 10:43:53.401434    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:43:53.411074    5160 logs.go:276] 0 containers: []
	W0927 10:43:53.411086    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:43:53.411156    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:43:53.422095    5160 logs.go:276] 0 containers: []
	W0927 10:43:53.422109    5160 logs.go:278] No container was found matching "storage-provisioner"
	I0927 10:43:53.422118    5160 logs.go:123] Gathering logs for kube-apiserver [9e8db25c44dd] ...
	I0927 10:43:53.422125    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e8db25c44dd"
	I0927 10:43:53.446753    5160 logs.go:123] Gathering logs for kube-scheduler [35682614f5ee] ...
	I0927 10:43:53.446767    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35682614f5ee"
	I0927 10:43:53.462062    5160 logs.go:123] Gathering logs for kube-proxy [4e635867a2e5] ...
	I0927 10:43:53.462077    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e635867a2e5"
	I0927 10:43:53.474729    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:43:53.474738    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:43:53.479023    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:43:53.479029    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:43:53.512920    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:43:53.512930    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:43:53.524827    5160 logs.go:123] Gathering logs for kube-controller-manager [6fdfe084ab6a] ...
	I0927 10:43:53.524838    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fdfe084ab6a"
	I0927 10:43:53.542326    5160 logs.go:123] Gathering logs for kube-controller-manager [da497851937b] ...
	I0927 10:43:53.542339    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da497851937b"
	I0927 10:43:53.554796    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:43:53.554810    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:43:53.577031    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:43:53.577039    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:43:53.614284    5160 logs.go:123] Gathering logs for etcd [252e9947f2ea] ...
	I0927 10:43:53.614297    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 252e9947f2ea"
	I0927 10:43:53.628795    5160 logs.go:123] Gathering logs for coredns [348a1d50ee96] ...
	I0927 10:43:53.628808    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 348a1d50ee96"
	I0927 10:43:53.639974    5160 logs.go:123] Gathering logs for kube-apiserver [021632da64ae] ...
	I0927 10:43:53.639987    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 021632da64ae"
	I0927 10:43:53.658538    5160 logs.go:123] Gathering logs for etcd [f305d112a88e] ...
	I0927 10:43:53.658548    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f305d112a88e"
	I0927 10:43:53.674149    5160 logs.go:123] Gathering logs for kube-scheduler [9a8675225acd] ...
	I0927 10:43:53.674158    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a8675225acd"
	I0927 10:43:56.196283    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:01.198759    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:01.198843    5160 kubeadm.go:597] duration metric: took 4m2.909202s to restartPrimaryControlPlane
	W0927 10:44:01.198909    5160 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0927 10:44:01.198933    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
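At this point the four-minute restart budget is exhausted (the duration metric above reads 4m2.909202s), so minikube stops probing and rebuilds the control plane from scratch: kubeadm reset, a refreshed kubeadm.yaml, removal of stale kubeconfigs, then kubeadm init. Condensed from the Run: lines in this section (same paths and flags; a sketch of the sequence, not the orchestration code itself):

    KPATH=/var/lib/minikube/binaries/v1.24.1
    sudo env PATH="$KPATH:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KPATH:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem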
	I0927 10:44:02.159506    5160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 10:44:02.164387    5160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 10:44:02.167462    5160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 10:44:02.170051    5160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 10:44:02.170057    5160 kubeadm.go:157] found existing configuration files:
	
	I0927 10:44:02.170087    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf
	I0927 10:44:02.172517    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 10:44:02.172543    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 10:44:02.175581    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf
	I0927 10:44:02.178041    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 10:44:02.178066    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 10:44:02.180625    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf
	I0927 10:44:02.183478    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 10:44:02.183506    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 10:44:02.186047    5160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf
	I0927 10:44:02.188706    5160 kubeadm.go:163] "https://control-plane.minikube.internal:50526" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50526 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 10:44:02.188730    5160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
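The four grep/rm exchanges above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (https://control-plane.minikube.internal:50526). Because kubeadm reset has just deleted all four files, each grep exits with status 2 (GNU grep's code for an error such as a missing file, consistent with the stderr shown) and each path is removed again harmlessly before kubeadm init regenerates it. Equivalent shell, as a sketch:

    endpoint="https://control-plane.minikube.internal:50526"
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done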
	I0927 10:44:02.191868    5160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 10:44:02.210333    5160 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0927 10:44:02.210362    5160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 10:44:02.255815    5160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 10:44:02.255866    5160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 10:44:02.255927    5160 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 10:44:02.306493    5160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 10:44:02.310725    5160 out.go:235]   - Generating certificates and keys ...
	I0927 10:44:02.310766    5160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 10:44:02.310798    5160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 10:44:02.310840    5160 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 10:44:02.310872    5160 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 10:44:02.310909    5160 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 10:44:02.310943    5160 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 10:44:02.310976    5160 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 10:44:02.311008    5160 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 10:44:02.311061    5160 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 10:44:02.311119    5160 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 10:44:02.311138    5160 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 10:44:02.311175    5160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 10:44:02.354928    5160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 10:44:02.508589    5160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 10:44:02.641062    5160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 10:44:02.786495    5160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 10:44:02.815148    5160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 10:44:02.815554    5160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 10:44:02.815577    5160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 10:44:02.881875    5160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 10:44:02.886121    5160 out.go:235]   - Booting up control plane ...
	I0927 10:44:02.886167    5160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 10:44:02.886202    5160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 10:44:02.886235    5160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 10:44:02.886279    5160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 10:44:02.886351    5160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 10:44:07.388480    5160 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501913 seconds
	I0927 10:44:07.388549    5160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 10:44:07.392092    5160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 10:44:07.917108    5160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 10:44:07.917455    5160 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-862000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 10:44:08.420672    5160 kubeadm.go:310] [bootstrap-token] Using token: grm2ho.lrxp2943rot0jvnk
	I0927 10:44:08.426701    5160 out.go:235]   - Configuring RBAC rules ...
	I0927 10:44:08.426762    5160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 10:44:08.426808    5160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 10:44:08.432240    5160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 10:44:08.433103    5160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 10:44:08.434021    5160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 10:44:08.434820    5160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 10:44:08.438180    5160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 10:44:08.607197    5160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 10:44:08.825232    5160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 10:44:08.825793    5160 kubeadm.go:310] 
	I0927 10:44:08.825822    5160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 10:44:08.825830    5160 kubeadm.go:310] 
	I0927 10:44:08.825865    5160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 10:44:08.825870    5160 kubeadm.go:310] 
	I0927 10:44:08.825892    5160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 10:44:08.825918    5160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 10:44:08.825989    5160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 10:44:08.825993    5160 kubeadm.go:310] 
	I0927 10:44:08.826019    5160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 10:44:08.826021    5160 kubeadm.go:310] 
	I0927 10:44:08.826056    5160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 10:44:08.826060    5160 kubeadm.go:310] 
	I0927 10:44:08.826087    5160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 10:44:08.826127    5160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 10:44:08.826170    5160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 10:44:08.826177    5160 kubeadm.go:310] 
	I0927 10:44:08.826214    5160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 10:44:08.826262    5160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 10:44:08.826267    5160 kubeadm.go:310] 
	I0927 10:44:08.826343    5160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token grm2ho.lrxp2943rot0jvnk \
	I0927 10:44:08.826394    5160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95f688fe91b88dce6b995ff3f6bae2601ecac9e72bc38ebf6a40f1df30a0f1f1 \
	I0927 10:44:08.826406    5160 kubeadm.go:310] 	--control-plane 
	I0927 10:44:08.826408    5160 kubeadm.go:310] 
	I0927 10:44:08.826451    5160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 10:44:08.826453    5160 kubeadm.go:310] 
	I0927 10:44:08.826492    5160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token grm2ho.lrxp2943rot0jvnk \
	I0927 10:44:08.826542    5160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95f688fe91b88dce6b995ff3f6bae2601ecac9e72bc38ebf6a40f1df30a0f1f1 
	I0927 10:44:08.828008    5160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
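The trailing [WARNING Service-Kubelet] line is actionable on its own: kubeadm started the kubelet for this boot, but the systemd unit is not enabled. A minimal sketch of the suggested fix, run inside the guest (standard systemctl usage; these commands are not part of the log):

	sudo systemctl enable kubelet.service   # persist the unit across reboots, as the warning asks
	systemctl is-enabled kubelet.service    # should now report "enabled"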
	I0927 10:44:08.828122    5160 cni.go:84] Creating CNI manager for ""
	I0927 10:44:08.828133    5160 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:44:08.831023    5160 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 10:44:08.838132    5160 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 10:44:08.840928    5160 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
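minikube pushes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist here, but the payload itself is not reproduced in the log. For orientation, a representative bridge CNI conflist is sketched below; the plugin options and pod subnet are typical illustrative values, not the exact bytes minikube writes:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF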
	I0927 10:44:08.845697    5160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 10:44:08.845743    5160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 10:44:08.845763    5160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-862000 minikube.k8s.io/updated_at=2024_09_27T10_44_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=stopped-upgrade-862000 minikube.k8s.io/primary=true
	I0927 10:44:08.888425    5160 ops.go:34] apiserver oom_adj: -16
	I0927 10:44:08.888440    5160 kubeadm.go:1113] duration metric: took 42.73925ms to wait for elevateKubeSystemPrivileges
	I0927 10:44:08.888446    5160 kubeadm.go:394] duration metric: took 4m10.612389042s to StartCluster
	I0927 10:44:08.888456    5160 settings.go:142] acquiring lock: {Name:mk58fc55a93399a03fb1c9ac710554db41068524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:44:08.888542    5160 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:44:08.888950    5160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/kubeconfig: {Name:mk8300c379932403020f33b54e2599e68fb2c757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:44:08.889174    5160 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:44:08.889203    5160 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 10:44:08.889280    5160 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-862000"
	I0927 10:44:08.889287    5160 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-862000"
	I0927 10:44:08.889292    5160 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-862000"
	I0927 10:44:08.889303    5160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-862000"
	I0927 10:44:08.889272    5160 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	W0927 10:44:08.889294    5160 addons.go:243] addon storage-provisioner should already be in state true
	I0927 10:44:08.889400    5160 host.go:66] Checking if "stopped-upgrade-862000" exists ...
	I0927 10:44:08.890301    5160 kapi.go:59] client config for stopped-upgrade-862000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/stopped-upgrade-862000/client.key", CAFile:"/Users/jenkins/minikube-integration/19712-1508/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a965d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 10:44:08.890441    5160 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-862000"
	W0927 10:44:08.890446    5160 addons.go:243] addon default-storageclass should already be in state true
	I0927 10:44:08.890453    5160 host.go:66] Checking if "stopped-upgrade-862000" exists ...
	I0927 10:44:08.893169    5160 out.go:177] * Verifying Kubernetes components...
	I0927 10:44:08.893503    5160 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 10:44:08.897155    5160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 10:44:08.897161    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	I0927 10:44:08.900940    5160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 10:44:08.904903    5160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 10:44:08.909000    5160 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 10:44:08.909007    5160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 10:44:08.909012    5160 sshutil.go:53] new ssh client: &{IP:localhost Port:50492 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/stopped-upgrade-862000/id_rsa Username:docker}
	I0927 10:44:08.976161    5160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 10:44:08.981602    5160 api_server.go:52] waiting for apiserver process to appear ...
	I0927 10:44:08.981650    5160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 10:44:08.985409    5160 api_server.go:72] duration metric: took 96.226708ms to wait for apiserver process to appear ...
	I0927 10:44:08.985417    5160 api_server.go:88] waiting for apiserver healthz status ...
	I0927 10:44:08.985424    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:08.998489    5160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 10:44:09.032923    5160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
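Both addon manifests are applied with the cluster-bundled kubectl against the local kubeconfig. A hedged pair of follow-up checks, reusing the same binary and kubeconfig paths as the log (these verification commands do not appear in the log itself):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl get storageclass
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl -n kube-system get pods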
	I0927 10:44:09.366508    5160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 10:44:09.366521    5160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 10:44:13.987493    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:13.987596    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:18.988336    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:18.988402    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:23.988805    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:23.988827    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:28.989369    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:28.989426    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:33.990299    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:33.990356    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:38.991513    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:38.991550    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0927 10:44:39.367990    5160 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0927 10:44:39.373383    5160 out.go:177] * Enabled addons: storage-provisioner
	I0927 10:44:39.382332    5160 addons.go:510] duration metric: took 30.493922084s for enable addons: enabled=[storage-provisioner]
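The default-storageclass failure above and the healthz timeouts that follow share one symptom: nothing answers on 10.0.2.15:8443. That address is QEMU's user-mode NAT guest IP, so it is normally routable only from inside the guest. A quick manual probe of the endpoint the test polls (the URL and pgrep pattern are taken from the log; the curl invocation is an added illustration):

	curl -k --connect-timeout 5 https://10.0.2.15:8443/healthz   # -k: the apiserver cert is self-signed
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'                 # confirm the process found earlier is still up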
	I0927 10:44:43.992971    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:43.993012    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:48.994844    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:48.994874    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:53.996982    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:53.997018    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:44:58.999132    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:44:58.999174    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:45:04.001114    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:45:04.001169    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:45:09.002730    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:45:09.002902    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:45:09.031849    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:45:09.031943    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:45:09.050178    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:45:09.050262    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:45:09.061125    5160 logs.go:276] 2 containers: [2993db39b491 e27ee549c2f1]
	I0927 10:45:09.061210    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:45:09.071422    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:45:09.071502    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:45:09.081516    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:45:09.081598    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:45:09.091807    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:45:09.091890    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:45:09.102198    5160 logs.go:276] 0 containers: []
	W0927 10:45:09.102209    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:45:09.102277    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:45:09.112599    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:45:09.112613    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:45:09.112619    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:45:09.149706    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:45:09.149718    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:45:09.191086    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:45:09.191097    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:45:09.206132    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:45:09.206141    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:45:09.219658    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:45:09.219668    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:45:09.230870    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:45:09.230881    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:45:09.243663    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:45:09.243674    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:45:09.261045    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:45:09.261059    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:45:09.266290    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:45:09.266299    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:45:09.281348    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:45:09.281358    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:45:09.293355    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:45:09.293366    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:45:09.305455    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:45:09.305468    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:45:09.331449    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:45:09.331460    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
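Every retry cycle from here on repeats the same gathering pass against the same containers. Condensed into one manual sweep (each command is lifted from the log; only the loop over container IDs is an added convenience):

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	for c in $(docker ps -aq --filter name=k8s_); do docker logs --tail 400 "$c"; done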
	I0927 10:45:11.842574    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:45:16.843260    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:45:16.843407    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:45:16.857381    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:45:16.857482    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:45:16.869491    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:45:16.869578    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:45:16.885386    5160 logs.go:276] 2 containers: [2993db39b491 e27ee549c2f1]
	I0927 10:45:16.885480    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:45:16.898301    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:45:16.898382    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:45:16.909693    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:45:16.909779    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:45:16.920921    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:45:16.920998    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:45:16.931684    5160 logs.go:276] 0 containers: []
	W0927 10:45:16.931695    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:45:16.931763    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:45:16.943899    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:45:16.943914    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:45:16.943919    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:45:16.977633    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:45:16.977641    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:45:16.982105    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:45:16.982112    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:45:16.996908    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:45:16.996921    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:45:17.009257    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:45:17.009267    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:45:17.023732    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:45:17.023742    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:45:17.046976    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:45:17.046984    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:45:17.083028    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:45:17.083041    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:45:17.096883    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:45:17.096893    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:45:17.113000    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:45:17.113010    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:45:17.124677    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:45:17.124692    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:45:17.152367    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:45:17.152377    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:45:17.164103    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:45:17.164112    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:45:19.677277    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:45:24.679563    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:45:24.680052    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:45:24.724122    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:45:24.724299    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:45:24.743924    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:45:24.744036    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:45:24.758959    5160 logs.go:276] 2 containers: [2993db39b491 e27ee549c2f1]
	I0927 10:45:24.759056    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:45:24.771212    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:45:24.771300    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:45:24.782242    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:45:24.782317    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:45:24.797429    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:45:24.797501    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:45:24.807693    5160 logs.go:276] 0 containers: []
	W0927 10:45:24.807705    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:45:24.807773    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:45:24.817761    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:45:24.817774    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:45:24.817779    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:45:24.831315    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:45:24.831333    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:45:24.843362    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:45:24.843372    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:45:24.860324    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:45:24.860335    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:45:24.871854    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:45:24.871864    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:45:24.883212    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:45:24.883225    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:45:24.887654    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:45:24.887664    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:45:24.921917    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:45:24.921932    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:45:24.936415    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:45:24.936425    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:45:24.951547    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:45:24.951558    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:45:24.963383    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:45:24.963394    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:45:24.980278    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:45:24.980292    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:45:25.005485    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:45:25.005496    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:45:27.542601    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:45:32.543796    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:45:32.544422    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:45:32.584315    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:45:32.584472    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:45:32.606479    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:45:32.606593    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:45:32.621762    5160 logs.go:276] 2 containers: [2993db39b491 e27ee549c2f1]
	I0927 10:45:32.621856    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:45:32.634323    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:45:32.634395    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:45:32.645408    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:45:32.645477    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:45:32.656548    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:45:32.656623    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:45:32.666836    5160 logs.go:276] 0 containers: []
	W0927 10:45:32.666849    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:45:32.666919    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:45:32.682160    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:45:32.682175    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:45:32.682180    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:45:32.686359    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:45:32.686367    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:45:32.699709    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:45:32.699719    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:45:32.711828    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:45:32.711837    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:45:32.725800    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:45:32.725810    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:45:32.737680    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:45:32.737691    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:45:32.752758    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:45:32.752771    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:45:32.774986    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:45:32.774997    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:45:32.786342    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:45:32.786351    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:45:32.821731    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:45:32.821739    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:45:32.856953    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:45:32.856967    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:45:32.871085    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:45:32.871099    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:45:32.895221    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:45:32.895230    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:45:35.408815    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:45:40.411474    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:45:40.412053    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:45:40.456014    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:45:40.456167    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:45:40.475484    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:45:40.475586    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:45:40.489536    5160 logs.go:276] 2 containers: [2993db39b491 e27ee549c2f1]
	I0927 10:45:40.489624    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:45:40.501506    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:45:40.501589    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:45:40.512164    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:45:40.512241    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:45:40.522319    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:45:40.522408    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:45:40.533926    5160 logs.go:276] 0 containers: []
	W0927 10:45:40.533938    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:45:40.534004    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:45:40.544485    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:45:40.544499    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:45:40.544506    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:45:40.556089    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:45:40.556098    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:45:40.581102    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:45:40.581109    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:45:40.595390    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:45:40.595401    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:45:40.608847    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:45:40.608860    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:45:40.620346    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:45:40.620357    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:45:40.635889    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:45:40.635900    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:45:40.647833    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:45:40.647844    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:45:40.664953    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:45:40.664963    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:45:40.698857    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:45:40.698867    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:45:40.703010    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:45:40.703019    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:45:40.739505    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:45:40.739516    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:45:40.756629    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:45:40.756642    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:45:43.271683    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:45:48.274407    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:45:48.274965    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:45:48.321667    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:45:48.321814    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:45:48.341729    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:45:48.341823    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:45:48.356477    5160 logs.go:276] 2 containers: [2993db39b491 e27ee549c2f1]
	I0927 10:45:48.356573    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:45:48.368585    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:45:48.368669    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:45:48.383771    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:45:48.383862    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:45:48.395544    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:45:48.395631    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:45:48.406011    5160 logs.go:276] 0 containers: []
	W0927 10:45:48.406026    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:45:48.406099    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:45:48.416503    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:45:48.416522    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:45:48.416528    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:45:48.451628    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:45:48.451637    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:45:48.486421    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:45:48.486432    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:45:48.498903    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:45:48.498913    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:45:48.510327    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:45:48.510338    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:45:48.521907    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:45:48.521917    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:45:48.539033    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:45:48.539047    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:45:48.550811    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:45:48.550820    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:45:48.555063    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:45:48.555071    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:45:48.569381    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:45:48.569390    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:45:48.584123    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:45:48.584140    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:45:48.599508    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:45:48.599524    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:45:48.615148    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:45:48.615158    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:45:51.140972    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:45:56.143388    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:45:56.143893    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:45:56.180515    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:45:56.180673    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:45:56.203973    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:45:56.204079    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:45:56.218223    5160 logs.go:276] 2 containers: [2993db39b491 e27ee549c2f1]
	I0927 10:45:56.218316    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:45:56.229858    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:45:56.229931    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:45:56.240945    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:45:56.241015    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:45:56.251365    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:45:56.251444    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:45:56.262134    5160 logs.go:276] 0 containers: []
	W0927 10:45:56.262146    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:45:56.262211    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:45:56.273047    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:45:56.273067    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:45:56.273072    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:45:56.277727    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:45:56.277736    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:45:56.292119    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:45:56.292128    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:45:56.307761    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:45:56.307770    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:45:56.319757    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:45:56.319772    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:45:56.337145    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:45:56.337157    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:45:56.349535    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:45:56.349544    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:45:56.385678    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:45:56.385685    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:45:56.419953    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:45:56.419965    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:45:56.434356    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:45:56.434369    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:45:56.462660    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:45:56.462673    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:45:56.480959    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:45:56.480968    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:45:56.492546    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:45:56.492560    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:45:59.017991    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:46:04.020357    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:46:04.020864    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:46:04.061156    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:46:04.061314    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:46:04.082471    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:46:04.082581    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:46:04.096990    5160 logs.go:276] 2 containers: [2993db39b491 e27ee549c2f1]
	I0927 10:46:04.097073    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:46:04.109429    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:46:04.109512    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:46:04.120136    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:46:04.120210    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:46:04.130769    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:46:04.130850    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:46:04.141180    5160 logs.go:276] 0 containers: []
	W0927 10:46:04.141189    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:46:04.141250    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:46:04.152062    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:46:04.152078    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:46:04.152084    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:46:04.170248    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:46:04.170258    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:46:04.181545    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:46:04.181559    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:46:04.206316    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:46:04.206323    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:46:04.241201    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:46:04.241207    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:46:04.245362    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:46:04.245368    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:46:04.259236    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:46:04.259249    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:46:04.271958    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:46:04.271970    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:46:04.286995    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:46:04.287008    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:46:04.299447    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:46:04.299460    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:46:04.333623    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:46:04.333634    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:46:04.347769    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:46:04.347782    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:46:04.359416    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:46:04.359428    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:46:06.872778    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:46:11.873354    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:46:11.873762    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:46:11.906859    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:46:11.907012    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:46:11.926939    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:46:11.927048    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:46:11.941146    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:46:11.941236    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:46:11.952890    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:46:11.952984    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:46:11.963648    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:46:11.963727    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:46:11.974559    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:46:11.974630    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:46:11.985025    5160 logs.go:276] 0 containers: []
	W0927 10:46:11.985037    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:46:11.985105    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:46:11.999297    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:46:11.999314    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:46:11.999320    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:46:12.010475    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:46:12.010488    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:46:12.024365    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:46:12.024378    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:46:12.038998    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:46:12.039007    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:46:12.050815    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:46:12.050824    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:46:12.063053    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:46:12.063063    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:46:12.078238    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:46:12.078247    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:46:12.089688    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:46:12.089698    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:46:12.114677    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:46:12.114691    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:46:12.150343    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:46:12.150351    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:46:12.184676    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:46:12.184686    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:46:12.196750    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:46:12.196758    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:46:12.208170    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:46:12.208177    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:46:12.228806    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:46:12.228815    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:46:12.233547    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:46:12.233556    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
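The block above is one full iteration of minikube's diagnostic sweep: it discovers the container ID for each control-plane component with a name-filtered `docker ps`, then tails the last 400 lines of each container's logs. A minimal Go sketch of that discovery-then-tail pattern follows; it runs the docker CLI locally for brevity, whereas the real run issues the same commands through an SSH runner, and the component list is read off the filters above.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Component filters taken from the docker ps calls in the log above.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, name := range components {
            // docker ps -a --filter=name=k8s_<component> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                // cf. the 'No container was found matching "kindnet"' warning above
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            for _, id := range ids {
                // docker logs --tail 400 <id>, as in the "Gathering logs for ..." steps
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
            }
        }
    }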
	I0927 10:46:14.746578    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:46:19.749198    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
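The two lines above are the health probe that drives the whole loop: a GET against https://10.0.2.15:8443/healthz that aborts with "context deadline exceeded" after the client timeout (about 5 s here, judging by the gap between the check and its "stopped" line). A minimal sketch of an equivalent probe, assuming that 5 s timeout and skipping TLS verification on the assumption that the apiserver serves a self-signed certificate:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5 s gap before each "stopped" line
            Transport: &http.Transport{
                // assumption: self-signed apiserver cert, so skip verification
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            // an unresponsive apiserver surfaces here as
            // "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }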
	I0927 10:46:19.749475    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:46:19.774481    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:46:19.774614    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:46:19.791306    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:46:19.791414    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:46:19.805253    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:46:19.805343    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:46:19.816437    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:46:19.816515    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:46:19.828227    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:46:19.828321    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:46:19.843366    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:46:19.843456    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:46:19.853247    5160 logs.go:276] 0 containers: []
	W0927 10:46:19.853257    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:46:19.853336    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:46:19.865039    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:46:19.865056    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:46:19.865062    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:46:19.869187    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:46:19.869193    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:46:19.880364    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:46:19.880375    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:46:19.892051    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:46:19.892061    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:46:19.930099    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:46:19.930111    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:46:19.944053    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:46:19.944065    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:46:19.959916    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:46:19.959929    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:46:19.970917    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:46:19.970928    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:46:19.995058    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:46:19.995067    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:46:20.006751    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:46:20.006761    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:46:20.041381    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:46:20.041391    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:46:20.055194    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:46:20.055203    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:46:20.066634    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:46:20.066645    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:46:20.083880    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:46:20.083890    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:46:20.095592    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:46:20.095602    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
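Besides the per-container tails, each sweep also pulls host-level sources: the kubelet and docker/cri-docker units from journalctl, a dmesg filtered to warnings and above, "describe nodes" via the kubectl binary pinned under /var/lib/minikube/binaries/v1.24.1, and a container status listing that falls back from crictl to docker. The commands in the sketch below are copied verbatim from the Run: lines; executing them locally instead of over SSH is an assumption made for brevity.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Commands copied from the Run: lines above; the key is the label logs.go prints.
        collectors := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for name, cmd := range collectors {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("=== %s (err=%v) ===\n%s", name, err, out)
        }
    }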
	I0927 10:46:22.608594    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:46:27.611197    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:46:27.611755    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:46:27.651507    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:46:27.651678    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:46:27.673283    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:46:27.673406    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:46:27.688648    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:46:27.688750    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:46:27.707761    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:46:27.707840    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:46:27.718658    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:46:27.718735    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:46:27.728767    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:46:27.728865    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:46:27.739309    5160 logs.go:276] 0 containers: []
	W0927 10:46:27.739323    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:46:27.739398    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:46:27.750104    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:46:27.750120    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:46:27.750125    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:46:27.764481    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:46:27.764491    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:46:27.778338    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:46:27.778348    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:46:27.798495    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:46:27.798505    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:46:27.822084    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:46:27.822094    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:46:27.856681    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:46:27.856695    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:46:27.868429    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:46:27.868443    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:46:27.885667    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:46:27.885680    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:46:27.896894    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:46:27.896908    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:46:27.901174    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:46:27.901183    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:46:27.912540    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:46:27.912549    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:46:27.924714    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:46:27.924725    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:46:27.936126    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:46:27.936139    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:46:27.970537    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:46:27.970544    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:46:27.988293    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:46:27.988308    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:46:30.500952    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:46:35.503272    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:46:35.503859    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:46:35.544215    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:46:35.544409    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:46:35.566409    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:46:35.566531    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:46:35.586281    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:46:35.586373    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:46:35.598607    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:46:35.598688    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:46:35.609496    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:46:35.609575    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:46:35.624424    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:46:35.624504    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:46:35.634759    5160 logs.go:276] 0 containers: []
	W0927 10:46:35.634770    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:46:35.634842    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:46:35.644860    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:46:35.644874    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:46:35.644879    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:46:35.662218    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:46:35.662228    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:46:35.673886    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:46:35.673896    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:46:35.707937    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:46:35.707949    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:46:35.719804    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:46:35.719816    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:46:35.731649    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:46:35.731661    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:46:35.746920    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:46:35.746928    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:46:35.782693    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:46:35.782703    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:46:35.787136    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:46:35.787144    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:46:35.798401    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:46:35.798410    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:46:35.812426    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:46:35.812439    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:46:35.824483    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:46:35.824495    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:46:35.835897    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:46:35.835908    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:46:35.850774    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:46:35.850785    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:46:35.862720    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:46:35.862731    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:46:38.393989    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:46:43.396661    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:46:43.397213    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:46:43.440850    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:46:43.441009    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:46:43.460261    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:46:43.460370    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:46:43.474899    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:46:43.474982    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:46:43.486995    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:46:43.487073    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:46:43.497824    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:46:43.497899    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:46:43.508424    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:46:43.508503    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:46:43.518024    5160 logs.go:276] 0 containers: []
	W0927 10:46:43.518033    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:46:43.518092    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:46:43.528163    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:46:43.528180    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:46:43.528186    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:46:43.539292    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:46:43.539302    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:46:43.550642    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:46:43.550657    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:46:43.554980    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:46:43.554989    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:46:43.588722    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:46:43.588733    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:46:43.604463    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:46:43.604473    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:46:43.616089    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:46:43.616099    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:46:43.631465    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:46:43.631475    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:46:43.665607    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:46:43.665614    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:46:43.677403    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:46:43.677413    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:46:43.688976    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:46:43.688986    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:46:43.706375    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:46:43.706385    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:46:43.729607    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:46:43.729616    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:46:43.746813    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:46:43.746823    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:46:43.758929    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:46:43.758938    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:46:46.273196    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:46:51.275767    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
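Across cycles the probes land roughly 8 s apart (10:46:14, :22, :30, :38, :46, :54, ...): a 5 s probe timeout plus the 2-3 s the diagnostic sweep takes. A rough sketch of that outer loop, with an assumed overall wait budget and stubbed-out bodies, illustrates the cadence; the names here are illustrative, not minikube's:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // probeHealthz stands in for the 5 s healthz GET shown above.
    func probeHealthz() error {
        time.Sleep(5 * time.Second)
        return errors.New("context deadline exceeded")
    }

    // gatherDiagnostics stands in for the docker ps / docker logs / journalctl sweep,
    // which takes roughly 2-3 s per cycle in this log.
    func gatherDiagnostics() {
        time.Sleep(2500 * time.Millisecond)
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // assumed overall wait budget
        for time.Now().Before(deadline) {
            if err := probeHealthz(); err == nil {
                fmt.Println("apiserver healthy")
                return
            }
            gatherDiagnostics()
        }
        fmt.Println("gave up waiting for apiserver")
    }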
	I0927 10:46:51.275857    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:46:51.292110    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:46:51.292198    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:46:51.303469    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:46:51.303532    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:46:51.314515    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:46:51.314575    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:46:51.326050    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:46:51.326137    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:46:51.338346    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:46:51.338408    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:46:51.349615    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:46:51.349686    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:46:51.360621    5160 logs.go:276] 0 containers: []
	W0927 10:46:51.360633    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:46:51.360701    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:46:51.371490    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:46:51.371504    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:46:51.371509    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:46:51.407183    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:46:51.407197    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:46:51.426683    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:46:51.426694    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:46:51.438672    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:46:51.438681    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:46:51.463917    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:46:51.463930    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:46:51.477095    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:46:51.477104    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:46:51.488894    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:46:51.488906    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:46:51.493115    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:46:51.493125    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:46:51.529766    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:46:51.529779    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:46:51.542642    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:46:51.542654    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:46:51.561971    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:46:51.561982    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:46:51.578193    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:46:51.578207    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:46:51.591878    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:46:51.591889    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:46:51.604685    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:46:51.604695    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:46:51.622219    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:46:51.622232    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:46:54.138886    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:46:59.141430    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:46:59.141672    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:46:59.160229    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:46:59.160346    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:46:59.174070    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:46:59.174161    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:46:59.186427    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:46:59.186516    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:46:59.196828    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:46:59.196904    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:46:59.207299    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:46:59.207370    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:46:59.227522    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:46:59.227603    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:46:59.238265    5160 logs.go:276] 0 containers: []
	W0927 10:46:59.238276    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:46:59.238349    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:46:59.248468    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:46:59.248485    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:46:59.248491    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:46:59.260167    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:46:59.260181    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:46:59.283635    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:46:59.283642    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:46:59.317749    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:46:59.317757    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:46:59.321787    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:46:59.321796    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:46:59.336160    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:46:59.336175    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:46:59.347538    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:46:59.347551    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:46:59.359315    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:46:59.359325    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:46:59.370903    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:46:59.370912    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:46:59.388116    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:46:59.388126    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:46:59.409591    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:46:59.409600    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:46:59.449491    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:46:59.449502    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:46:59.464994    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:46:59.465004    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:46:59.476903    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:46:59.476913    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:46:59.490212    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:46:59.490221    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:47:02.007916    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:47:07.009744    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:47:07.009916    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:47:07.031720    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:47:07.031801    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:47:07.045318    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:47:07.045402    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:47:07.056325    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:47:07.056403    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:47:07.066724    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:47:07.066792    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:47:07.076895    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:47:07.076971    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:47:07.087543    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:47:07.087619    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:47:07.097497    5160 logs.go:276] 0 containers: []
	W0927 10:47:07.097508    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:47:07.097589    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:47:07.112200    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:47:07.112216    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:47:07.112222    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:47:07.145805    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:47:07.145814    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:47:07.159497    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:47:07.159507    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:47:07.175223    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:47:07.175238    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:47:07.200232    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:47:07.200242    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:47:07.234067    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:47:07.234078    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:47:07.245638    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:47:07.245648    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:47:07.257381    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:47:07.257390    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:47:07.268578    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:47:07.268588    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:47:07.272647    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:47:07.272656    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:47:07.286635    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:47:07.286645    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:47:07.307981    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:47:07.307990    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:47:07.322925    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:47:07.322941    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:47:07.334338    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:47:07.334348    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:47:07.345944    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:47:07.345954    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:47:09.862871    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:47:14.863489    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:47:14.863561    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:47:14.875729    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:47:14.875817    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:47:14.887298    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:47:14.887394    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:47:14.900096    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:47:14.900157    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:47:14.910711    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:47:14.910785    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:47:14.921598    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:47:14.921672    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:47:14.933511    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:47:14.933592    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:47:14.945162    5160 logs.go:276] 0 containers: []
	W0927 10:47:14.945177    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:47:14.945237    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:47:14.957037    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:47:14.957054    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:47:14.957060    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:47:14.972782    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:47:14.972794    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:47:14.985868    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:47:14.985881    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:47:14.998233    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:47:14.998245    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:47:15.014648    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:47:15.014659    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:47:15.040343    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:47:15.040362    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:47:15.077359    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:47:15.077381    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:47:15.082169    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:47:15.082179    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:47:15.118868    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:47:15.118880    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:47:15.134117    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:47:15.134129    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:47:15.146552    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:47:15.146565    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:47:15.161932    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:47:15.161946    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:47:15.179432    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:47:15.179443    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:47:15.192437    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:47:15.192449    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:47:15.208301    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:47:15.208313    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:47:17.728295    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:47:22.729433    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:47:22.730209    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:47:22.758248    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:47:22.758407    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:47:22.775857    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:47:22.775952    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:47:22.789793    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:47:22.789885    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:47:22.801447    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:47:22.801531    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:47:22.811855    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:47:22.811932    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:47:22.824324    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:47:22.824397    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:47:22.835239    5160 logs.go:276] 0 containers: []
	W0927 10:47:22.835249    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:47:22.835309    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:47:22.845559    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:47:22.845577    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:47:22.845583    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:47:22.879825    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:47:22.879840    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:47:22.891985    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:47:22.891995    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:47:22.903860    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:47:22.903870    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:47:22.939601    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:47:22.939611    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:47:22.951708    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:47:22.951724    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:47:22.963903    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:47:22.963912    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:47:22.975494    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:47:22.975505    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:47:22.992275    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:47:22.992287    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:47:23.005663    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:47:23.005673    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:47:23.023400    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:47:23.023414    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:47:23.035045    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:47:23.035058    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:47:23.050837    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:47:23.050849    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:47:23.066799    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:47:23.066811    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:47:23.091487    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:47:23.091494    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:47:25.597227    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:47:30.599044    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:47:30.599644    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:47:30.644049    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:47:30.644190    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:47:30.663785    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:47:30.663905    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:47:30.677992    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:47:30.678081    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:47:30.691944    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:47:30.692018    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:47:30.703178    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:47:30.703263    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:47:30.713778    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:47:30.713859    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:47:30.724429    5160 logs.go:276] 0 containers: []
	W0927 10:47:30.724446    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:47:30.724507    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:47:30.735268    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:47:30.735283    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:47:30.735288    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:47:30.746580    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:47:30.746590    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:47:30.757754    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:47:30.757764    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:47:30.775402    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:47:30.775411    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:47:30.780066    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:47:30.780074    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:47:30.791856    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:47:30.791869    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:47:30.805629    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:47:30.805639    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:47:30.841218    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:47:30.841225    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:47:30.855662    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:47:30.855671    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:47:30.869500    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:47:30.869510    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:47:30.881409    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:47:30.881419    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:47:30.893066    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:47:30.893077    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:47:30.917276    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:47:30.917283    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:47:30.951534    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:47:30.951549    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:47:30.963563    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:47:30.963577    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:47:33.481295    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:47:38.482246    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:47:38.482347    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:47:38.495032    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:47:38.495113    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:47:38.508699    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:47:38.508791    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:47:38.520534    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:47:38.520637    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:47:38.532324    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:47:38.532394    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:47:38.550709    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:47:38.550812    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:47:38.562753    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:47:38.562826    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:47:38.573624    5160 logs.go:276] 0 containers: []
	W0927 10:47:38.573637    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:47:38.573705    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:47:38.590470    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:47:38.590489    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:47:38.590495    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:47:38.610267    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:47:38.610278    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:47:38.648973    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:47:38.648989    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:47:38.669601    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:47:38.669612    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:47:38.684121    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:47:38.684134    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:47:38.697909    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:47:38.697921    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:47:38.711907    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:47:38.711918    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:47:38.716620    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:47:38.716628    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:47:38.737115    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:47:38.737129    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:47:38.750783    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:47:38.750801    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:47:38.763503    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:47:38.763512    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:47:38.779503    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:47:38.779521    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:47:38.796459    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:47:38.796475    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:47:38.825257    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:47:38.825273    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:47:38.862603    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:47:38.862620    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:47:41.378301    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:47:46.380653    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:47:46.381153    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:47:46.421725    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:47:46.421881    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:47:46.440673    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:47:46.440785    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:47:46.473520    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:47:46.473609    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:47:46.503270    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:47:46.503360    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:47:46.517192    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:47:46.517277    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:47:46.527596    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:47:46.527675    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:47:46.539061    5160 logs.go:276] 0 containers: []
	W0927 10:47:46.539074    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:47:46.539146    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:47:46.549495    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:47:46.549519    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:47:46.549525    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:47:46.564901    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:47:46.564915    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:47:46.568891    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:47:46.568897    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:47:46.641864    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:47:46.641877    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:47:46.657038    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:47:46.657049    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:47:46.675112    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:47:46.675122    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:47:46.692557    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:47:46.692570    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:47:46.704480    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:47:46.704490    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:47:46.716215    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:47:46.716225    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:47:46.728405    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:47:46.728417    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:47:46.751092    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:47:46.751098    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:47:46.784485    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:47:46.784495    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:47:46.799081    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:47:46.799092    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:47:46.813360    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:47:46.813369    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:47:46.824440    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:47:46.824452    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:47:49.343761    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:47:54.345739    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:47:54.345928    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:47:54.357585    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:47:54.357671    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:47:54.369813    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:47:54.369889    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:47:54.380523    5160 logs.go:276] 4 containers: [cec2fdd78c96 73905e11921a 2993db39b491 e27ee549c2f1]
	I0927 10:47:54.380594    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:47:54.390696    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:47:54.390777    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:47:54.401045    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:47:54.401118    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:47:54.415343    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:47:54.415423    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:47:54.428913    5160 logs.go:276] 0 containers: []
	W0927 10:47:54.428925    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:47:54.429013    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:47:54.439428    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:47:54.439443    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:47:54.439450    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:47:54.472665    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:47:54.472676    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:47:54.486261    5160 logs.go:123] Gathering logs for coredns [e27ee549c2f1] ...
	I0927 10:47:54.486274    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27ee549c2f1"
	I0927 10:47:54.500236    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:47:54.500250    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:47:54.506543    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:47:54.506554    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:47:54.518224    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:47:54.518236    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:47:54.555224    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:47:54.555238    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:47:54.566951    5160 logs.go:123] Gathering logs for coredns [2993db39b491] ...
	I0927 10:47:54.566961    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2993db39b491"
	I0927 10:47:54.578644    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:47:54.578655    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:47:54.590924    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:47:54.590938    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:47:54.605520    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:47:54.605530    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:47:54.619520    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:47:54.619530    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:47:54.634117    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:47:54.634126    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:47:54.651239    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:47:54.651250    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:47:54.675787    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:47:54.675797    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:47:57.189800    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:48:02.192236    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:48:02.192736    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0927 10:48:02.224353    5160 logs.go:276] 1 containers: [1af100bdb4ec]
	I0927 10:48:02.224502    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0927 10:48:02.243982    5160 logs.go:276] 1 containers: [bcf01e04714d]
	I0927 10:48:02.244095    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0927 10:48:02.258116    5160 logs.go:276] 4 containers: [27951c8924fc e8e1553d8101 cec2fdd78c96 73905e11921a]
	I0927 10:48:02.258208    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0927 10:48:02.269902    5160 logs.go:276] 1 containers: [7d1ef6c9345b]
	I0927 10:48:02.269970    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0927 10:48:02.280452    5160 logs.go:276] 1 containers: [fe87387aef42]
	I0927 10:48:02.280514    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0927 10:48:02.296179    5160 logs.go:276] 1 containers: [79bff4d6b850]
	I0927 10:48:02.296261    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0927 10:48:02.306203    5160 logs.go:276] 0 containers: []
	W0927 10:48:02.306218    5160 logs.go:278] No container was found matching "kindnet"
	I0927 10:48:02.306284    5160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0927 10:48:02.316649    5160 logs.go:276] 1 containers: [da4f67acc28e]
	I0927 10:48:02.316665    5160 logs.go:123] Gathering logs for kube-proxy [fe87387aef42] ...
	I0927 10:48:02.316672    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe87387aef42"
	I0927 10:48:02.328217    5160 logs.go:123] Gathering logs for kube-controller-manager [79bff4d6b850] ...
	I0927 10:48:02.328230    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bff4d6b850"
	I0927 10:48:02.349171    5160 logs.go:123] Gathering logs for kube-apiserver [1af100bdb4ec] ...
	I0927 10:48:02.349180    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1af100bdb4ec"
	I0927 10:48:02.363884    5160 logs.go:123] Gathering logs for storage-provisioner [da4f67acc28e] ...
	I0927 10:48:02.363896    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da4f67acc28e"
	I0927 10:48:02.375550    5160 logs.go:123] Gathering logs for etcd [bcf01e04714d] ...
	I0927 10:48:02.375559    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcf01e04714d"
	I0927 10:48:02.389704    5160 logs.go:123] Gathering logs for coredns [27951c8924fc] ...
	I0927 10:48:02.389713    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27951c8924fc"
	I0927 10:48:02.400994    5160 logs.go:123] Gathering logs for coredns [e8e1553d8101] ...
	I0927 10:48:02.401007    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8e1553d8101"
	I0927 10:48:02.413815    5160 logs.go:123] Gathering logs for coredns [73905e11921a] ...
	I0927 10:48:02.413826    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73905e11921a"
	I0927 10:48:02.429594    5160 logs.go:123] Gathering logs for kube-scheduler [7d1ef6c9345b] ...
	I0927 10:48:02.429612    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d1ef6c9345b"
	I0927 10:48:02.445616    5160 logs.go:123] Gathering logs for kubelet ...
	I0927 10:48:02.445631    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 10:48:02.483035    5160 logs.go:123] Gathering logs for dmesg ...
	I0927 10:48:02.483052    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 10:48:02.487743    5160 logs.go:123] Gathering logs for describe nodes ...
	I0927 10:48:02.487753    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 10:48:02.527332    5160 logs.go:123] Gathering logs for coredns [cec2fdd78c96] ...
	I0927 10:48:02.527344    5160 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cec2fdd78c96"
	I0927 10:48:02.539819    5160 logs.go:123] Gathering logs for Docker ...
	I0927 10:48:02.539829    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0927 10:48:02.565788    5160 logs.go:123] Gathering logs for container status ...
	I0927 10:48:02.565809    5160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 10:48:05.085365    5160 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0927 10:48:10.087452    5160 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 10:48:10.093710    5160 out.go:201] 
	W0927 10:48:10.111057    5160 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0927 10:48:10.111118    5160 out.go:270] * 
	W0927 10:48:10.113818    5160 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:48:10.124714    5160 out.go:201] 
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-862000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (563.31s)
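
The failure above is a timeout rather than a crash: the repeated "Gathering logs" cycles are minikube's wait loop, which polls https://10.0.2.15:8443/healthz between log collections, and the v1.24.1 kube-apiserver never reports healthy within the 6m0s node-wait budget, so the run exits with GUEST_START. When reproducing locally, the health endpoint can be probed directly from inside the guest. This is a minimal sketch, assuming the stopped-upgrade-862000 profile still exists and that curl is available in the guest image (-k skips TLS verification, since the cluster CA is not in the host trust store):

	# probe the apiserver health endpoint from inside the minikube guest
	minikube -p stopped-upgrade-862000 ssh -- curl -k https://10.0.2.15:8443/healthz

A healthy apiserver answers "ok"; anything else, or a hang, points at the control plane rather than the test harness.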

TestPause/serial/Start (10.07s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-766000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-766000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.012743625s)
-- stdout --
	* [pause-766000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-766000" primary control-plane node in "pause-766000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-766000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-766000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-766000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-766000 -n pause-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-766000 -n pause-766000: exit status 7 (60.683667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.07s)
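
Every qemu2 start in this report that exits with GUEST_PROVISION fails for the same reason visible in the stdout above: the socket_vmnet client on the host cannot reach the daemon behind /var/run/socket_vmnet ("Connection refused"), so no VM is ever created. A quick host-side check is sketched below; the launchd query and the Homebrew service name are assumptions based on a typical socket_vmnet install, not something this log confirms:

	# does the daemon's socket exist on the host?
	ls -l /var/run/socket_vmnet
	# is a socket_vmnet service loaded? (the label depends on the install method)
	sudo launchctl list | grep -i socket_vmnet
	# if installed via Homebrew, restarting the service may clear the refusal
	sudo "$(which brew)" services restart socket_vmnet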

TestNoKubernetes/serial/StartWithK8s (9.94s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-882000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-882000 --driver=qemu2 : exit status 80 (9.891045625s)
-- stdout --
	* [NoKubernetes-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-882000" primary control-plane node in "NoKubernetes-882000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-882000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-882000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-882000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-882000 -n NoKubernetes-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-882000 -n NoKubernetes-882000: exit status 7 (51.4975ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.94s)

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-882000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-882000 --no-kubernetes --driver=qemu2 : exit status 80 (5.245510291s)
-- stdout --
	* [NoKubernetes-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-882000
	* Restarting existing qemu2 VM for "NoKubernetes-882000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-882000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-882000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-882000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-882000 -n NoKubernetes-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-882000 -n NoKubernetes-882000: exit status 7 (58.611958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-882000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-882000 --no-kubernetes --driver=qemu2 : exit status 80 (5.245397459s)
-- stdout --
	* [NoKubernetes-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-882000
	* Restarting existing qemu2 VM for "NoKubernetes-882000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-882000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-882000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-882000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-882000 -n NoKubernetes-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-882000 -n NoKubernetes-882000: exit status 7 (50.286625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

TestNoKubernetes/serial/StartNoArgs (5.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-882000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-882000 --driver=qemu2 : exit status 80 (5.285650458s)
-- stdout --
	* [NoKubernetes-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-882000
	* Restarting existing qemu2 VM for "NoKubernetes-882000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-882000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-882000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-882000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-882000 -n NoKubernetes-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-882000 -n NoKubernetes-882000: exit status 7 (65.811542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.35s)

TestNetworkPlugins/group/auto/Start (9.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.894950375s)
-- stdout --
	* [auto-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-770000" primary control-plane node in "auto-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0927 10:46:40.064796    5450 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:46:40.064909    5450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:46:40.064912    5450 out.go:358] Setting ErrFile to fd 2...
	I0927 10:46:40.064914    5450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:46:40.065044    5450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:46:40.066120    5450 out.go:352] Setting JSON to false
	I0927 10:46:40.082078    5450 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4564,"bootTime":1727454636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:46:40.082143    5450 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:46:40.088144    5450 out.go:177] * [auto-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:46:40.096099    5450 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:46:40.096155    5450 notify.go:220] Checking for updates...
	I0927 10:46:40.104130    5450 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:46:40.107135    5450 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:46:40.110084    5450 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:46:40.113115    5450 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:46:40.116059    5450 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:46:40.119434    5450 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:46:40.119502    5450 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:46:40.119552    5450 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:46:40.124102    5450 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:46:40.131134    5450 start.go:297] selected driver: qemu2
	I0927 10:46:40.131140    5450 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:46:40.131147    5450 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:46:40.133302    5450 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:46:40.136133    5450 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:46:40.139172    5450 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:46:40.139188    5450 cni.go:84] Creating CNI manager for ""
	I0927 10:46:40.139213    5450 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:46:40.139220    5450 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:46:40.139241    5450 start.go:340] cluster config:
	{Name:auto-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:46:40.142616    5450 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:46:40.152146    5450 out.go:177] * Starting "auto-770000" primary control-plane node in "auto-770000" cluster
	I0927 10:46:40.156985    5450 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:46:40.157012    5450 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:46:40.157023    5450 cache.go:56] Caching tarball of preloaded images
	I0927 10:46:40.157104    5450 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:46:40.157109    5450 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:46:40.157191    5450 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/auto-770000/config.json ...
	I0927 10:46:40.157204    5450 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/auto-770000/config.json: {Name:mk7dac82df1e34dea8a92a477a1fbd2105c119a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:46:40.157732    5450 start.go:360] acquireMachinesLock for auto-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:46:40.157768    5450 start.go:364] duration metric: took 29.208µs to acquireMachinesLock for "auto-770000"
	I0927 10:46:40.157780    5450 start.go:93] Provisioning new machine with config: &{Name:auto-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:46:40.157812    5450 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:46:40.168198    5450 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:46:40.184065    5450 start.go:159] libmachine.API.Create for "auto-770000" (driver="qemu2")
	I0927 10:46:40.184099    5450 client.go:168] LocalClient.Create starting
	I0927 10:46:40.184151    5450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:46:40.184178    5450 main.go:141] libmachine: Decoding PEM data...
	I0927 10:46:40.184187    5450 main.go:141] libmachine: Parsing certificate...
	I0927 10:46:40.184222    5450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:46:40.184248    5450 main.go:141] libmachine: Decoding PEM data...
	I0927 10:46:40.184256    5450 main.go:141] libmachine: Parsing certificate...
	I0927 10:46:40.184583    5450 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:46:40.358016    5450 main.go:141] libmachine: Creating SSH key...
	I0927 10:46:40.435327    5450 main.go:141] libmachine: Creating Disk image...
	I0927 10:46:40.435338    5450 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:46:40.435556    5450 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2
	I0927 10:46:40.445171    5450 main.go:141] libmachine: STDOUT: 
	I0927 10:46:40.445189    5450 main.go:141] libmachine: STDERR: 
	I0927 10:46:40.445257    5450 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2 +20000M
	I0927 10:46:40.453228    5450 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:46:40.453255    5450 main.go:141] libmachine: STDERR: 
	I0927 10:46:40.453270    5450 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2
	I0927 10:46:40.453274    5450 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:46:40.453286    5450 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:46:40.453316    5450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:b5:f8:f8:cf:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2
	I0927 10:46:40.454972    5450 main.go:141] libmachine: STDOUT: 
	I0927 10:46:40.454990    5450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:46:40.455014    5450 client.go:171] duration metric: took 270.916708ms to LocalClient.Create
	I0927 10:46:42.457167    5450 start.go:128] duration metric: took 2.299382542s to createHost
	I0927 10:46:42.457262    5450 start.go:83] releasing machines lock for "auto-770000", held for 2.29954575s
	W0927 10:46:42.457355    5450 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:46:42.473189    5450 out.go:177] * Deleting "auto-770000" in qemu2 ...
	W0927 10:46:42.506497    5450 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:46:42.506521    5450 start.go:729] Will try again in 5 seconds ...
	I0927 10:46:47.508651    5450 start.go:360] acquireMachinesLock for auto-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:46:47.509226    5450 start.go:364] duration metric: took 434.041µs to acquireMachinesLock for "auto-770000"
	I0927 10:46:47.509375    5450 start.go:93] Provisioning new machine with config: &{Name:auto-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:46:47.509624    5450 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:46:47.517316    5450 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:46:47.566530    5450 start.go:159] libmachine.API.Create for "auto-770000" (driver="qemu2")
	I0927 10:46:47.566586    5450 client.go:168] LocalClient.Create starting
	I0927 10:46:47.566702    5450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:46:47.566773    5450 main.go:141] libmachine: Decoding PEM data...
	I0927 10:46:47.566791    5450 main.go:141] libmachine: Parsing certificate...
	I0927 10:46:47.566856    5450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:46:47.566901    5450 main.go:141] libmachine: Decoding PEM data...
	I0927 10:46:47.566914    5450 main.go:141] libmachine: Parsing certificate...
	I0927 10:46:47.567771    5450 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:46:47.734299    5450 main.go:141] libmachine: Creating SSH key...
	I0927 10:46:47.861150    5450 main.go:141] libmachine: Creating Disk image...
	I0927 10:46:47.861157    5450 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:46:47.861374    5450 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2
	I0927 10:46:47.871226    5450 main.go:141] libmachine: STDOUT: 
	I0927 10:46:47.871243    5450 main.go:141] libmachine: STDERR: 
	I0927 10:46:47.871299    5450 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2 +20000M
	I0927 10:46:47.879326    5450 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:46:47.879345    5450 main.go:141] libmachine: STDERR: 
	I0927 10:46:47.879358    5450 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2
	I0927 10:46:47.879362    5450 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:46:47.879372    5450 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:46:47.879411    5450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:e3:a4:b6:19:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/auto-770000/disk.qcow2
	I0927 10:46:47.881114    5450 main.go:141] libmachine: STDOUT: 
	I0927 10:46:47.881127    5450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:46:47.881139    5450 client.go:171] duration metric: took 314.554042ms to LocalClient.Create
	I0927 10:46:49.883311    5450 start.go:128] duration metric: took 2.373686209s to createHost
	I0927 10:46:49.883435    5450 start.go:83] releasing machines lock for "auto-770000", held for 2.374227541s
	W0927 10:46:49.883789    5450 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:46:49.898515    5450 out.go:201] 
	W0927 10:46:49.903605    5450 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:46:49.903636    5450 out.go:270] * 
	* 
	W0927 10:46:49.906331    5450 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:46:49.920552    5450 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.90s)
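
Note: every start in this group fails at the same step. libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet and only then exec qemu with an already-connected network descriptor (hence -netdev socket,id=net0,fd=3 in the command line above). "Connection refused" therefore means nothing is listening on that socket on the CI host. A minimal shell sketch to confirm and recover, assuming a Homebrew install of socket_vmnet (the service name is an assumption, not taken from this log):

    # Does the unix socket referenced by the log exist at all?
    ls -l /var/run/socket_vmnet

    # socket_vmnet must run as root; with a Homebrew install it is
    # typically managed as a service (assumed setup):
    HOMEBREW=$(which brew)
    sudo "${HOMEBREW}" services restart socket_vmnet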

TestNetworkPlugins/group/kindnet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0927 10:46:56.694442    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.827695209s)

-- stdout --
	* [kindnet-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-770000" primary control-plane node in "kindnet-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:46:52.114747    5563 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:46:52.114887    5563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:46:52.114891    5563 out.go:358] Setting ErrFile to fd 2...
	I0927 10:46:52.114894    5563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:46:52.115023    5563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:46:52.116058    5563 out.go:352] Setting JSON to false
	I0927 10:46:52.132925    5563 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4576,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:46:52.133006    5563 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:46:52.140098    5563 out.go:177] * [kindnet-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:46:52.148873    5563 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:46:52.148929    5563 notify.go:220] Checking for updates...
	I0927 10:46:52.155844    5563 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:46:52.158854    5563 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:46:52.161891    5563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:46:52.164856    5563 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:46:52.167851    5563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:46:52.171198    5563 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:46:52.171258    5563 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:46:52.171308    5563 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:46:52.175848    5563 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:46:52.182881    5563 start.go:297] selected driver: qemu2
	I0927 10:46:52.182888    5563 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:46:52.182896    5563 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:46:52.185134    5563 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:46:52.187746    5563 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:46:52.190987    5563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:46:52.191021    5563 cni.go:84] Creating CNI manager for "kindnet"
	I0927 10:46:52.191024    5563 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 10:46:52.191050    5563 start.go:340] cluster config:
	{Name:kindnet-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:46:52.194401    5563 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:46:52.201752    5563 out.go:177] * Starting "kindnet-770000" primary control-plane node in "kindnet-770000" cluster
	I0927 10:46:52.205822    5563 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:46:52.205840    5563 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:46:52.205848    5563 cache.go:56] Caching tarball of preloaded images
	I0927 10:46:52.205895    5563 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:46:52.205900    5563 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:46:52.205946    5563 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/kindnet-770000/config.json ...
	I0927 10:46:52.205955    5563 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/kindnet-770000/config.json: {Name:mkf5f6ee12481fc96930eb0c231b50901aeecc1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:46:52.206165    5563 start.go:360] acquireMachinesLock for kindnet-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:46:52.206194    5563 start.go:364] duration metric: took 23.834µs to acquireMachinesLock for "kindnet-770000"
	I0927 10:46:52.206204    5563 start.go:93] Provisioning new machine with config: &{Name:kindnet-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:46:52.206235    5563 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:46:52.213858    5563 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:46:52.228883    5563 start.go:159] libmachine.API.Create for "kindnet-770000" (driver="qemu2")
	I0927 10:46:52.228909    5563 client.go:168] LocalClient.Create starting
	I0927 10:46:52.228974    5563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:46:52.229003    5563 main.go:141] libmachine: Decoding PEM data...
	I0927 10:46:52.229012    5563 main.go:141] libmachine: Parsing certificate...
	I0927 10:46:52.229058    5563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:46:52.229090    5563 main.go:141] libmachine: Decoding PEM data...
	I0927 10:46:52.229098    5563 main.go:141] libmachine: Parsing certificate...
	I0927 10:46:52.229526    5563 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:46:52.390522    5563 main.go:141] libmachine: Creating SSH key...
	I0927 10:46:52.464284    5563 main.go:141] libmachine: Creating Disk image...
	I0927 10:46:52.464292    5563 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:46:52.464529    5563 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2
	I0927 10:46:52.473964    5563 main.go:141] libmachine: STDOUT: 
	I0927 10:46:52.473993    5563 main.go:141] libmachine: STDERR: 
	I0927 10:46:52.474049    5563 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2 +20000M
	I0927 10:46:52.482096    5563 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:46:52.482111    5563 main.go:141] libmachine: STDERR: 
	I0927 10:46:52.482127    5563 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2
	I0927 10:46:52.482133    5563 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:46:52.482146    5563 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:46:52.482173    5563 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:96:f5:3c:6a:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2
	I0927 10:46:52.483771    5563 main.go:141] libmachine: STDOUT: 
	I0927 10:46:52.483782    5563 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:46:52.483803    5563 client.go:171] duration metric: took 254.896666ms to LocalClient.Create
	I0927 10:46:54.485960    5563 start.go:128] duration metric: took 2.2797575s to createHost
	I0927 10:46:54.486036    5563 start.go:83] releasing machines lock for "kindnet-770000", held for 2.279893958s
	W0927 10:46:54.486105    5563 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:46:54.502985    5563 out.go:177] * Deleting "kindnet-770000" in qemu2 ...
	W0927 10:46:54.535954    5563 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:46:54.535977    5563 start.go:729] Will try again in 5 seconds ...
	I0927 10:46:59.537953    5563 start.go:360] acquireMachinesLock for kindnet-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:46:59.538070    5563 start.go:364] duration metric: took 99.375µs to acquireMachinesLock for "kindnet-770000"
	I0927 10:46:59.538084    5563 start.go:93] Provisioning new machine with config: &{Name:kindnet-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:46:59.538162    5563 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:46:59.545514    5563 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:46:59.562221    5563 start.go:159] libmachine.API.Create for "kindnet-770000" (driver="qemu2")
	I0927 10:46:59.562245    5563 client.go:168] LocalClient.Create starting
	I0927 10:46:59.562306    5563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:46:59.562350    5563 main.go:141] libmachine: Decoding PEM data...
	I0927 10:46:59.562360    5563 main.go:141] libmachine: Parsing certificate...
	I0927 10:46:59.562402    5563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:46:59.562427    5563 main.go:141] libmachine: Decoding PEM data...
	I0927 10:46:59.562435    5563 main.go:141] libmachine: Parsing certificate...
	I0927 10:46:59.562732    5563 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:46:59.722246    5563 main.go:141] libmachine: Creating SSH key...
	I0927 10:46:59.846605    5563 main.go:141] libmachine: Creating Disk image...
	I0927 10:46:59.846614    5563 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:46:59.846831    5563 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2
	I0927 10:46:59.856301    5563 main.go:141] libmachine: STDOUT: 
	I0927 10:46:59.856314    5563 main.go:141] libmachine: STDERR: 
	I0927 10:46:59.856382    5563 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2 +20000M
	I0927 10:46:59.864126    5563 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:46:59.864140    5563 main.go:141] libmachine: STDERR: 
	I0927 10:46:59.864156    5563 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2
	I0927 10:46:59.864160    5563 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:46:59.864172    5563 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:46:59.864208    5563 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:6c:ad:75:30:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kindnet-770000/disk.qcow2
	I0927 10:46:59.865880    5563 main.go:141] libmachine: STDOUT: 
	I0927 10:46:59.865894    5563 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:46:59.865907    5563 client.go:171] duration metric: took 303.666834ms to LocalClient.Create
	I0927 10:47:01.868083    5563 start.go:128] duration metric: took 2.329949083s to createHost
	I0927 10:47:01.868185    5563 start.go:83] releasing machines lock for "kindnet-770000", held for 2.330163792s
	W0927 10:47:01.868635    5563 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:01.878250    5563 out.go:201] 
	W0927 10:47:01.888399    5563 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:47:01.888426    5563 out.go:270] * 
	* 
	W0927 10:47:01.890798    5563 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:47:01.900146    5563 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
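
Note: the kindnet log above also shows the retry path: StartHost fails (start.go:714), the half-created profile is deleted, minikube waits 5 seconds (start.go:729) and tries once more before exiting with GUEST_PROVISION, surfaced as exit status 80. The following self-contained Go sketch reproduces just the failing probe; the socket path comes from the log, everything else is illustrative and not minikube's actual code:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probeSocket does the first thing socket_vmnet_client does:
    // dial the daemon's unix socket. "connection refused" here is
    // exactly the failure seen throughout this report.
    func probeSocket(path string) error {
    	conn, err := net.Dial("unix", path)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	const sock = "/var/run/socket_vmnet" // path taken from the log
    	err := probeSocket(sock)
    	if err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // mirrors "Will try again in 5 seconds"
    		err = probeSocket(sock)
    	}
    	if err != nil {
    		fmt.Println("X GUEST_PROVISION:", err) // minikube exits 80 at this point
    		return
    	}
    	fmt.Println("socket_vmnet is reachable")
    }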

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.796310166s)

-- stdout --
	* [calico-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-770000" primary control-plane node in "calico-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:47:04.170838    5676 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:47:04.170970    5676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:47:04.170974    5676 out.go:358] Setting ErrFile to fd 2...
	I0927 10:47:04.170976    5676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:47:04.171121    5676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:47:04.172337    5676 out.go:352] Setting JSON to false
	I0927 10:47:04.188814    5676 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4588,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:47:04.188899    5676 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:47:04.195686    5676 out.go:177] * [calico-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:47:04.202618    5676 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:47:04.202693    5676 notify.go:220] Checking for updates...
	I0927 10:47:04.210552    5676 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:47:04.213535    5676 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:47:04.216579    5676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:47:04.219528    5676 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:47:04.222520    5676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:47:04.225927    5676 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:47:04.225989    5676 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:47:04.226037    5676 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:47:04.229438    5676 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:47:04.236526    5676 start.go:297] selected driver: qemu2
	I0927 10:47:04.236532    5676 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:47:04.236538    5676 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:47:04.238789    5676 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:47:04.240046    5676 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:47:04.242681    5676 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:47:04.242697    5676 cni.go:84] Creating CNI manager for "calico"
	I0927 10:47:04.242703    5676 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0927 10:47:04.242738    5676 start.go:340] cluster config:
	{Name:calico-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:47:04.246336    5676 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:47:04.253508    5676 out.go:177] * Starting "calico-770000" primary control-plane node in "calico-770000" cluster
	I0927 10:47:04.257548    5676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:47:04.257564    5676 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:47:04.257581    5676 cache.go:56] Caching tarball of preloaded images
	I0927 10:47:04.257646    5676 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:47:04.257651    5676 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:47:04.257715    5676 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/calico-770000/config.json ...
	I0927 10:47:04.257726    5676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/calico-770000/config.json: {Name:mk8832904d9d75c7471b43d3b76792653d11b54e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:47:04.257951    5676 start.go:360] acquireMachinesLock for calico-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:47:04.257985    5676 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "calico-770000"
	I0927 10:47:04.257996    5676 start.go:93] Provisioning new machine with config: &{Name:calico-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:47:04.258023    5676 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:47:04.265514    5676 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:47:04.282443    5676 start.go:159] libmachine.API.Create for "calico-770000" (driver="qemu2")
	I0927 10:47:04.282474    5676 client.go:168] LocalClient.Create starting
	I0927 10:47:04.282548    5676 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:47:04.282581    5676 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:04.282590    5676 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:04.282629    5676 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:47:04.282655    5676 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:04.282661    5676 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:04.283131    5676 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:47:04.447626    5676 main.go:141] libmachine: Creating SSH key...
	I0927 10:47:04.543241    5676 main.go:141] libmachine: Creating Disk image...
	I0927 10:47:04.543251    5676 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:47:04.543462    5676 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2
	I0927 10:47:04.552588    5676 main.go:141] libmachine: STDOUT: 
	I0927 10:47:04.552606    5676 main.go:141] libmachine: STDERR: 
	I0927 10:47:04.552703    5676 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2 +20000M
	I0927 10:47:04.560576    5676 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:47:04.560590    5676 main.go:141] libmachine: STDERR: 
	I0927 10:47:04.560613    5676 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2
	I0927 10:47:04.560618    5676 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:47:04.560630    5676 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:47:04.560657    5676 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:5c:a9:67:46:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2
	I0927 10:47:04.562213    5676 main.go:141] libmachine: STDOUT: 
	I0927 10:47:04.562225    5676 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:47:04.562245    5676 client.go:171] duration metric: took 279.773709ms to LocalClient.Create
	I0927 10:47:06.564409    5676 start.go:128] duration metric: took 2.306402542s to createHost
	I0927 10:47:06.564486    5676 start.go:83] releasing machines lock for "calico-770000", held for 2.306550208s
	W0927 10:47:06.564609    5676 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:06.571126    5676 out.go:177] * Deleting "calico-770000" in qemu2 ...
	W0927 10:47:06.608117    5676 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:06.608141    5676 start.go:729] Will try again in 5 seconds ...
	I0927 10:47:11.609893    5676 start.go:360] acquireMachinesLock for calico-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:47:11.610467    5676 start.go:364] duration metric: took 465.667µs to acquireMachinesLock for "calico-770000"
	I0927 10:47:11.610542    5676 start.go:93] Provisioning new machine with config: &{Name:calico-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:47:11.610891    5676 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:47:11.614005    5676 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:47:11.664843    5676 start.go:159] libmachine.API.Create for "calico-770000" (driver="qemu2")
	I0927 10:47:11.664898    5676 client.go:168] LocalClient.Create starting
	I0927 10:47:11.665007    5676 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:47:11.665065    5676 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:11.665083    5676 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:11.665141    5676 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:47:11.665193    5676 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:11.665204    5676 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:11.665745    5676 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:47:11.834179    5676 main.go:141] libmachine: Creating SSH key...
	I0927 10:47:11.873027    5676 main.go:141] libmachine: Creating Disk image...
	I0927 10:47:11.873037    5676 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:47:11.873238    5676 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2
	I0927 10:47:11.882439    5676 main.go:141] libmachine: STDOUT: 
	I0927 10:47:11.882461    5676 main.go:141] libmachine: STDERR: 
	I0927 10:47:11.882513    5676 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2 +20000M
	I0927 10:47:11.890650    5676 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:47:11.890666    5676 main.go:141] libmachine: STDERR: 
	I0927 10:47:11.890683    5676 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2
	I0927 10:47:11.890689    5676 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:47:11.890697    5676 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:47:11.890728    5676 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:2e:71:c3:80:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/calico-770000/disk.qcow2
	I0927 10:47:11.892384    5676 main.go:141] libmachine: STDOUT: 
	I0927 10:47:11.892399    5676 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:47:11.892411    5676 client.go:171] duration metric: took 227.626667ms to LocalClient.Create
	I0927 10:47:13.893614    5676 start.go:128] duration metric: took 2.283824417s to createHost
	I0927 10:47:13.893677    5676 start.go:83] releasing machines lock for "calico-770000", held for 2.284337167s
	W0927 10:47:13.893967    5676 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:13.906764    5676 out.go:201] 
	W0927 10:47:13.910905    5676 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:47:13.910979    5676 out.go:270] * 
	* 
	W0927 10:47:13.912978    5676 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:47:13.923828    5676 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
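Every start attempt in this group fails the same way: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched with its network fd and createHost gives up. "Connection refused" on a unix socket means no daemon is accepting on that path, even if the socket file itself exists. A minimal Go probe for that condition (not part of the test suite; the socket path is copied from the log, everything else is illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client uses; a healthy
		// socket_vmnet daemon accepts immediately, a dead one refuses.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err) // matches the failure in the log
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}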

TestNetworkPlugins/group/custom-flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.879477833s)

-- stdout --
	* [custom-flannel-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-770000" primary control-plane node in "custom-flannel-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:47:16.398613    5794 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:47:16.398748    5794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:47:16.398751    5794 out.go:358] Setting ErrFile to fd 2...
	I0927 10:47:16.398753    5794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:47:16.398897    5794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:47:16.399918    5794 out.go:352] Setting JSON to false
	I0927 10:47:16.416361    5794 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4600,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:47:16.416435    5794 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:47:16.423084    5794 out.go:177] * [custom-flannel-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:47:16.429955    5794 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:47:16.430001    5794 notify.go:220] Checking for updates...
	I0927 10:47:16.436988    5794 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:47:16.439997    5794 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:47:16.441491    5794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:47:16.444939    5794 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:47:16.447972    5794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:47:16.451393    5794 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:47:16.451466    5794 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:47:16.451521    5794 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:47:16.455972    5794 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:47:16.462957    5794 start.go:297] selected driver: qemu2
	I0927 10:47:16.462964    5794 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:47:16.462969    5794 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:47:16.465220    5794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:47:16.467975    5794 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:47:16.471051    5794 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:47:16.471074    5794 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0927 10:47:16.471083    5794 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0927 10:47:16.471125    5794 start.go:340] cluster config:
	{Name:custom-flannel-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:47:16.474968    5794 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:47:16.481941    5794 out.go:177] * Starting "custom-flannel-770000" primary control-plane node in "custom-flannel-770000" cluster
	I0927 10:47:16.485952    5794 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:47:16.485966    5794 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:47:16.485971    5794 cache.go:56] Caching tarball of preloaded images
	I0927 10:47:16.486033    5794 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:47:16.486038    5794 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:47:16.486083    5794 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/custom-flannel-770000/config.json ...
	I0927 10:47:16.486094    5794 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/custom-flannel-770000/config.json: {Name:mk9c8448194468961fde1d041d9f7ae7263b5206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:47:16.486407    5794 start.go:360] acquireMachinesLock for custom-flannel-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:47:16.486439    5794 start.go:364] duration metric: took 24.541µs to acquireMachinesLock for "custom-flannel-770000"
	I0927 10:47:16.486449    5794 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:47:16.486476    5794 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:47:16.494971    5794 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:47:16.510763    5794 start.go:159] libmachine.API.Create for "custom-flannel-770000" (driver="qemu2")
	I0927 10:47:16.510788    5794 client.go:168] LocalClient.Create starting
	I0927 10:47:16.510851    5794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:47:16.510881    5794 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:16.510890    5794 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:16.510929    5794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:47:16.510958    5794 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:16.510965    5794 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:16.511305    5794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:47:16.669333    5794 main.go:141] libmachine: Creating SSH key...
	I0927 10:47:16.807306    5794 main.go:141] libmachine: Creating Disk image...
	I0927 10:47:16.807318    5794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:47:16.807553    5794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2
	I0927 10:47:16.817104    5794 main.go:141] libmachine: STDOUT: 
	I0927 10:47:16.817132    5794 main.go:141] libmachine: STDERR: 
	I0927 10:47:16.817198    5794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2 +20000M
	I0927 10:47:16.825251    5794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:47:16.825268    5794 main.go:141] libmachine: STDERR: 
	I0927 10:47:16.825288    5794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2
	I0927 10:47:16.825293    5794 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:47:16.825307    5794 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:47:16.825331    5794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:a5:5b:08:a5:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2
	I0927 10:47:16.827113    5794 main.go:141] libmachine: STDOUT: 
	I0927 10:47:16.827172    5794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:47:16.827208    5794 client.go:171] duration metric: took 316.542ms to LocalClient.Create
	I0927 10:47:18.828787    5794 start.go:128] duration metric: took 2.343144s to createHost
	I0927 10:47:18.828946    5794 start.go:83] releasing machines lock for "custom-flannel-770000", held for 2.343373083s
	W0927 10:47:18.829044    5794 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:18.843517    5794 out.go:177] * Deleting "custom-flannel-770000" in qemu2 ...
	W0927 10:47:18.880907    5794 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:18.880934    5794 start.go:729] Will try again in 5 seconds ...
	I0927 10:47:23.881594    5794 start.go:360] acquireMachinesLock for custom-flannel-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:47:23.881903    5794 start.go:364] duration metric: took 253.333µs to acquireMachinesLock for "custom-flannel-770000"
	I0927 10:47:23.881972    5794 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:47:23.882101    5794 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:47:23.893431    5794 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:47:23.925594    5794 start.go:159] libmachine.API.Create for "custom-flannel-770000" (driver="qemu2")
	I0927 10:47:23.925642    5794 client.go:168] LocalClient.Create starting
	I0927 10:47:23.925744    5794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:47:23.925811    5794 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:23.925829    5794 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:23.925883    5794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:47:23.925919    5794 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:23.925930    5794 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:23.926388    5794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:47:24.090599    5794 main.go:141] libmachine: Creating SSH key...
	I0927 10:47:24.177714    5794 main.go:141] libmachine: Creating Disk image...
	I0927 10:47:24.177725    5794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:47:24.177939    5794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2
	I0927 10:47:24.187171    5794 main.go:141] libmachine: STDOUT: 
	I0927 10:47:24.187186    5794 main.go:141] libmachine: STDERR: 
	I0927 10:47:24.187259    5794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2 +20000M
	I0927 10:47:24.195475    5794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:47:24.195492    5794 main.go:141] libmachine: STDERR: 
	I0927 10:47:24.195504    5794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2
	I0927 10:47:24.195510    5794 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:47:24.195520    5794 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:47:24.195548    5794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:76:96:14:84:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/custom-flannel-770000/disk.qcow2
	I0927 10:47:24.197326    5794 main.go:141] libmachine: STDOUT: 
	I0927 10:47:24.197341    5794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:47:24.197353    5794 client.go:171] duration metric: took 271.77525ms to LocalClient.Create
	I0927 10:47:26.199163    5794 start.go:128] duration metric: took 2.317591834s to createHost
	I0927 10:47:26.199246    5794 start.go:83] releasing machines lock for "custom-flannel-770000", held for 2.317888417s
	W0927 10:47:26.199606    5794 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:26.209468    5794 out.go:201] 
	W0927 10:47:26.218453    5794 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:47:26.218505    5794 out.go:270] * 
	* 
	W0927 10:47:26.220863    5794 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:47:26.235359    5794 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.88s)
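The stderr trace above also shows minikube's recovery path: the first createHost fails, the half-built "custom-flannel-770000" profile is deleted, and after a fixed five-second wait ("Will try again in 5 seconds ...") a second createHost runs before the command exits with status 80. A hedged sketch of that control flow, with createHost standing in for the internal start.createHost (names and messages are illustrative, not minikube's source):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// In the real run this execs socket_vmnet_client + qemu-system-aarch64;
		// here we just return the error both attempts hit in the log.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err) // surfaces as exit status 80
			}
		}
	}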

TestNetworkPlugins/group/false/Start (9.74s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.741423375s)

-- stdout --
	* [false-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-770000" primary control-plane node in "false-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:47:28.641905    5913 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:47:28.642035    5913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:47:28.642039    5913 out.go:358] Setting ErrFile to fd 2...
	I0927 10:47:28.642041    5913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:47:28.642161    5913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:47:28.643444    5913 out.go:352] Setting JSON to false
	I0927 10:47:28.660169    5913 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4612,"bootTime":1727454636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:47:28.660243    5913 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:47:28.667448    5913 out.go:177] * [false-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:47:28.675281    5913 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:47:28.675334    5913 notify.go:220] Checking for updates...
	I0927 10:47:28.682192    5913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:47:28.685155    5913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:47:28.688157    5913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:47:28.691099    5913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:47:28.694137    5913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:47:28.697552    5913 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:47:28.697616    5913 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:47:28.697669    5913 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:47:28.701172    5913 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:47:28.708219    5913 start.go:297] selected driver: qemu2
	I0927 10:47:28.708225    5913 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:47:28.708231    5913 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:47:28.710468    5913 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:47:28.711769    5913 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:47:28.714251    5913 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:47:28.714268    5913 cni.go:84] Creating CNI manager for "false"
	I0927 10:47:28.714290    5913 start.go:340] cluster config:
	{Name:false-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:47:28.717803    5913 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:47:28.725145    5913 out.go:177] * Starting "false-770000" primary control-plane node in "false-770000" cluster
	I0927 10:47:28.729152    5913 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:47:28.729167    5913 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:47:28.729173    5913 cache.go:56] Caching tarball of preloaded images
	I0927 10:47:28.729224    5913 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:47:28.729229    5913 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:47:28.729283    5913 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/false-770000/config.json ...
	I0927 10:47:28.729295    5913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/false-770000/config.json: {Name:mkc510dc43b589227b300990e371a4a9272b54d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:47:28.729487    5913 start.go:360] acquireMachinesLock for false-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:47:28.729516    5913 start.go:364] duration metric: took 24.167µs to acquireMachinesLock for "false-770000"
	I0927 10:47:28.729527    5913 start.go:93] Provisioning new machine with config: &{Name:false-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:47:28.729559    5913 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:47:28.732188    5913 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:47:28.747587    5913 start.go:159] libmachine.API.Create for "false-770000" (driver="qemu2")
	I0927 10:47:28.747608    5913 client.go:168] LocalClient.Create starting
	I0927 10:47:28.747662    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:47:28.747691    5913 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:28.747699    5913 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:28.747738    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:47:28.747762    5913 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:28.747773    5913 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:28.748107    5913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:47:28.903341    5913 main.go:141] libmachine: Creating SSH key...
	I0927 10:47:28.968592    5913 main.go:141] libmachine: Creating Disk image...
	I0927 10:47:28.968598    5913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:47:28.968798    5913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2
	I0927 10:47:28.978176    5913 main.go:141] libmachine: STDOUT: 
	I0927 10:47:28.978197    5913 main.go:141] libmachine: STDERR: 
	I0927 10:47:28.978259    5913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2 +20000M
	I0927 10:47:28.986329    5913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:47:28.986350    5913 main.go:141] libmachine: STDERR: 
	I0927 10:47:28.986364    5913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2
	I0927 10:47:28.986369    5913 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:47:28.986380    5913 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:47:28.986412    5913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:89:ec:7d:66:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2
	I0927 10:47:28.988212    5913 main.go:141] libmachine: STDOUT: 
	I0927 10:47:28.988226    5913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:47:28.988247    5913 client.go:171] duration metric: took 240.680208ms to LocalClient.Create
	I0927 10:47:30.989959    5913 start.go:128] duration metric: took 2.260807375s to createHost
	I0927 10:47:30.989990    5913 start.go:83] releasing machines lock for "false-770000", held for 2.260887208s
	W0927 10:47:30.990035    5913 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:30.994588    5913 out.go:177] * Deleting "false-770000" in qemu2 ...
	W0927 10:47:31.015041    5913 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:31.015051    5913 start.go:729] Will try again in 5 seconds ...
	I0927 10:47:36.016443    5913 start.go:360] acquireMachinesLock for false-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:47:36.016742    5913 start.go:364] duration metric: took 235.583µs to acquireMachinesLock for "false-770000"
	I0927 10:47:36.016781    5913 start.go:93] Provisioning new machine with config: &{Name:false-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:47:36.016936    5913 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:47:36.035495    5913 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:47:36.070207    5913 start.go:159] libmachine.API.Create for "false-770000" (driver="qemu2")
	I0927 10:47:36.070253    5913 client.go:168] LocalClient.Create starting
	I0927 10:47:36.070365    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:47:36.070421    5913 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:36.070436    5913 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:36.070493    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:47:36.070550    5913 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:36.070560    5913 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:36.071249    5913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:47:36.235029    5913 main.go:141] libmachine: Creating SSH key...
	I0927 10:47:36.288370    5913 main.go:141] libmachine: Creating Disk image...
	I0927 10:47:36.288380    5913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:47:36.288592    5913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2
	I0927 10:47:36.297940    5913 main.go:141] libmachine: STDOUT: 
	I0927 10:47:36.297964    5913 main.go:141] libmachine: STDERR: 
	I0927 10:47:36.298016    5913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2 +20000M
	I0927 10:47:36.306176    5913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:47:36.306204    5913 main.go:141] libmachine: STDERR: 
	I0927 10:47:36.306217    5913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2
	I0927 10:47:36.306222    5913 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:47:36.306231    5913 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:47:36.306259    5913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:76:36:f4:d1:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/false-770000/disk.qcow2
	I0927 10:47:36.307973    5913 main.go:141] libmachine: STDOUT: 
	I0927 10:47:36.307997    5913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:47:36.308010    5913 client.go:171] duration metric: took 237.782834ms to LocalClient.Create
	I0927 10:47:38.309947    5913 start.go:128] duration metric: took 2.29327275s to createHost
	I0927 10:47:38.310069    5913 start.go:83] releasing machines lock for "false-770000", held for 2.293586917s
	W0927 10:47:38.310351    5913 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:38.323993    5913 out.go:201] 
	W0927 10:47:38.327050    5913 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:47:38.327070    5913 out.go:270] * 
	* 
	W0927 10:47:38.328670    5913 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:47:38.341938    5913 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.74s)
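Note that disk provisioning itself succeeds on every attempt: qemu-img converts the raw seed image to qcow2 and then resizes it by +20000M, and only the subsequent socket_vmnet_client launch fails. A sketch of those two qemu-img steps as libmachine runs them (paths shortened for illustration; the log uses per-profile paths under .minikube/machines):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Step 1: convert the raw boot image into the qcow2 the VM will use.
		// Step 2: grow the qcow2 by 20000 MB, matching Disk=20000MB in the config.
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2"},
			{"qemu-img", "resize", "disk.qcow2", "+20000M"},
		}
		for _, args := range steps {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				log.Fatalf("%v failed: %v\n%s", args, err, out)
			}
		}
	}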

TestNetworkPlugins/group/enable-default-cni/Start (9.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.860764208s)

-- stdout --
	* [enable-default-cni-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-770000" primary control-plane node in "enable-default-cni-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:47:40.610690    6022 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:47:40.610841    6022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:47:40.610847    6022 out.go:358] Setting ErrFile to fd 2...
	I0927 10:47:40.610850    6022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:47:40.610990    6022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:47:40.612203    6022 out.go:352] Setting JSON to false
	I0927 10:47:40.628923    6022 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4624,"bootTime":1727454636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:47:40.628993    6022 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:47:40.635929    6022 out.go:177] * [enable-default-cni-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:47:40.645705    6022 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:47:40.645740    6022 notify.go:220] Checking for updates...
	I0927 10:47:40.653620    6022 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:47:40.656746    6022 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:47:40.659758    6022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:47:40.662744    6022 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:47:40.665791    6022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:47:40.669174    6022 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:47:40.669247    6022 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:47:40.669301    6022 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:47:40.673732    6022 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:47:40.680754    6022 start.go:297] selected driver: qemu2
	I0927 10:47:40.680763    6022 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:47:40.680771    6022 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:47:40.683151    6022 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:47:40.685736    6022 out.go:177] * Automatically selected the socket_vmnet network
	E0927 10:47:40.688724    6022 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0927 10:47:40.688737    6022 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:47:40.688750    6022 cni.go:84] Creating CNI manager for "bridge"
	I0927 10:47:40.688759    6022 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:47:40.688783    6022 start.go:340] cluster config:
	{Name:enable-default-cni-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:47:40.692410    6022 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:47:40.699631    6022 out.go:177] * Starting "enable-default-cni-770000" primary control-plane node in "enable-default-cni-770000" cluster
	I0927 10:47:40.703741    6022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:47:40.703758    6022 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:47:40.703768    6022 cache.go:56] Caching tarball of preloaded images
	I0927 10:47:40.703846    6022 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:47:40.703863    6022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:47:40.703918    6022 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/enable-default-cni-770000/config.json ...
	I0927 10:47:40.703929    6022 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/enable-default-cni-770000/config.json: {Name:mke8f3d8a383b90ea306dcf0ce9d1a93a409b5c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:47:40.704150    6022 start.go:360] acquireMachinesLock for enable-default-cni-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:47:40.704182    6022 start.go:364] duration metric: took 25.208µs to acquireMachinesLock for "enable-default-cni-770000"
	I0927 10:47:40.704193    6022 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:47:40.704222    6022 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:47:40.712716    6022 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:47:40.728068    6022 start.go:159] libmachine.API.Create for "enable-default-cni-770000" (driver="qemu2")
	I0927 10:47:40.728098    6022 client.go:168] LocalClient.Create starting
	I0927 10:47:40.728163    6022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:47:40.728200    6022 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:40.728209    6022 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:40.728251    6022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:47:40.728274    6022 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:40.728280    6022 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:40.728632    6022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:47:40.886253    6022 main.go:141] libmachine: Creating SSH key...
	I0927 10:47:40.968880    6022 main.go:141] libmachine: Creating Disk image...
	I0927 10:47:40.968886    6022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:47:40.969086    6022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2
	I0927 10:47:40.978426    6022 main.go:141] libmachine: STDOUT: 
	I0927 10:47:40.978445    6022 main.go:141] libmachine: STDERR: 
	I0927 10:47:40.978506    6022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2 +20000M
	I0927 10:47:40.986515    6022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:47:40.986528    6022 main.go:141] libmachine: STDERR: 
	I0927 10:47:40.986538    6022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2
	I0927 10:47:40.986546    6022 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:47:40.986557    6022 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:47:40.986584    6022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ed:cf:64:21:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2
	I0927 10:47:40.988207    6022 main.go:141] libmachine: STDOUT: 
	I0927 10:47:40.988224    6022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:47:40.988243    6022 client.go:171] duration metric: took 260.167416ms to LocalClient.Create
	I0927 10:47:42.990462    6022 start.go:128] duration metric: took 2.286401375s to createHost
	I0927 10:47:42.990564    6022 start.go:83] releasing machines lock for "enable-default-cni-770000", held for 2.286601209s
	W0927 10:47:42.990636    6022 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:43.009907    6022 out.go:177] * Deleting "enable-default-cni-770000" in qemu2 ...
	W0927 10:47:43.045395    6022 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:43.045423    6022 start.go:729] Will try again in 5 seconds ...
	I0927 10:47:48.047239    6022 start.go:360] acquireMachinesLock for enable-default-cni-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:47:48.047863    6022 start.go:364] duration metric: took 489.625µs to acquireMachinesLock for "enable-default-cni-770000"
	I0927 10:47:48.047937    6022 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:47:48.048207    6022 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:47:48.055035    6022 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:47:48.105068    6022 start.go:159] libmachine.API.Create for "enable-default-cni-770000" (driver="qemu2")
	I0927 10:47:48.105158    6022 client.go:168] LocalClient.Create starting
	I0927 10:47:48.105367    6022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:47:48.105496    6022 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:48.105514    6022 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:48.105584    6022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:47:48.105632    6022 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:48.105645    6022 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:48.106328    6022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:47:48.272578    6022 main.go:141] libmachine: Creating SSH key...
	I0927 10:47:48.373546    6022 main.go:141] libmachine: Creating Disk image...
	I0927 10:47:48.373555    6022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:47:48.373780    6022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2
	I0927 10:47:48.383186    6022 main.go:141] libmachine: STDOUT: 
	I0927 10:47:48.383314    6022 main.go:141] libmachine: STDERR: 
	I0927 10:47:48.383377    6022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2 +20000M
	I0927 10:47:48.391342    6022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:47:48.391396    6022 main.go:141] libmachine: STDERR: 
	I0927 10:47:48.391414    6022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2
	I0927 10:47:48.391435    6022 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:47:48.391446    6022 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:47:48.391474    6022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b5:4c:c4:5a:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/enable-default-cni-770000/disk.qcow2
	I0927 10:47:48.393202    6022 main.go:141] libmachine: STDOUT: 
	I0927 10:47:48.393221    6022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:47:48.393240    6022 client.go:171] duration metric: took 288.074458ms to LocalClient.Create
	I0927 10:47:50.395411    6022 start.go:128] duration metric: took 2.347318334s to createHost
	I0927 10:47:50.395613    6022 start.go:83] releasing machines lock for "enable-default-cni-770000", held for 2.347894292s
	W0927 10:47:50.395961    6022 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:50.406521    6022 out.go:201] 
	W0927 10:47:50.416485    6022 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:47:50.416532    6022 out.go:270] * 
	* 
	W0927 10:47:50.419687    6022 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:47:50.429472    6022 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.86s)
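Every failure in this group follows the same chain: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet, and with no socket_vmnet daemon accepting on that path the dial fails with "Connection refused" before QEMU ever boots. A minimal Go sketch of that failing step (a hypothetical diagnostic probe, not minikube code; the socket path is taken from SocketVMnetPath in the cluster config above):

	// probe.go: dial the socket_vmnet unix socket, the first thing
	// socket_vmnet_client has to do before it can hand QEMU a network fd.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// When the socket file exists but no daemon is attached to it,
			// this reports the same "connection refused" seen in the logs above.
			fmt.Fprintf(os.Stderr, "dial failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}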

TestNetworkPlugins/group/flannel/Start (9.8s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.802406625s)

-- stdout --
	* [flannel-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-770000" primary control-plane node in "flannel-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:47:52.658603    6135 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:47:52.658752    6135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:47:52.658756    6135 out.go:358] Setting ErrFile to fd 2...
	I0927 10:47:52.658758    6135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:47:52.658888    6135 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:47:52.659978    6135 out.go:352] Setting JSON to false
	I0927 10:47:52.676374    6135 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4636,"bootTime":1727454636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:47:52.676456    6135 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:47:52.682105    6135 out.go:177] * [flannel-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:47:52.690947    6135 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:47:52.691029    6135 notify.go:220] Checking for updates...
	I0927 10:47:52.695461    6135 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:47:52.698887    6135 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:47:52.701883    6135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:47:52.704886    6135 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:47:52.707800    6135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:47:52.711268    6135 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:47:52.711333    6135 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:47:52.711383    6135 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:47:52.715869    6135 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:47:52.722882    6135 start.go:297] selected driver: qemu2
	I0927 10:47:52.722887    6135 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:47:52.722895    6135 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:47:52.725237    6135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:47:52.728897    6135 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:47:52.731880    6135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:47:52.731898    6135 cni.go:84] Creating CNI manager for "flannel"
	I0927 10:47:52.731901    6135 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0927 10:47:52.731926    6135 start.go:340] cluster config:
	{Name:flannel-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:47:52.735679    6135 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:47:52.743727    6135 out.go:177] * Starting "flannel-770000" primary control-plane node in "flannel-770000" cluster
	I0927 10:47:52.747794    6135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:47:52.747807    6135 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:47:52.747814    6135 cache.go:56] Caching tarball of preloaded images
	I0927 10:47:52.747870    6135 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:47:52.747876    6135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:47:52.747930    6135 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/flannel-770000/config.json ...
	I0927 10:47:52.747940    6135 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/flannel-770000/config.json: {Name:mk02bc8a40451b80d92fa8d5495e2c1e83a5f70b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:47:52.748160    6135 start.go:360] acquireMachinesLock for flannel-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:47:52.748192    6135 start.go:364] duration metric: took 26.833µs to acquireMachinesLock for "flannel-770000"
	I0927 10:47:52.748205    6135 start.go:93] Provisioning new machine with config: &{Name:flannel-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:47:52.748229    6135 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:47:52.755842    6135 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:47:52.771913    6135 start.go:159] libmachine.API.Create for "flannel-770000" (driver="qemu2")
	I0927 10:47:52.771938    6135 client.go:168] LocalClient.Create starting
	I0927 10:47:52.772003    6135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:47:52.772040    6135 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:52.772049    6135 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:52.772084    6135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:47:52.772112    6135 main.go:141] libmachine: Decoding PEM data...
	I0927 10:47:52.772121    6135 main.go:141] libmachine: Parsing certificate...
	I0927 10:47:52.772468    6135 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:47:52.931543    6135 main.go:141] libmachine: Creating SSH key...
	I0927 10:47:52.980972    6135 main.go:141] libmachine: Creating Disk image...
	I0927 10:47:52.980978    6135 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:47:52.981192    6135 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2
	I0927 10:47:52.990494    6135 main.go:141] libmachine: STDOUT: 
	I0927 10:47:52.990516    6135 main.go:141] libmachine: STDERR: 
	I0927 10:47:52.990584    6135 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2 +20000M
	I0927 10:47:52.998475    6135 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:47:52.998491    6135 main.go:141] libmachine: STDERR: 
	I0927 10:47:52.998508    6135 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2
	I0927 10:47:52.998513    6135 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:47:52.998524    6135 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:47:52.998555    6135 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ed:e9:97:fe:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2
	I0927 10:47:53.000265    6135 main.go:141] libmachine: STDOUT: 
	I0927 10:47:53.000282    6135 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:47:53.000310    6135 client.go:171] duration metric: took 228.379833ms to LocalClient.Create
	I0927 10:47:55.002427    6135 start.go:128] duration metric: took 2.254303084s to createHost
	I0927 10:47:55.002504    6135 start.go:83] releasing machines lock for "flannel-770000", held for 2.254439583s
	W0927 10:47:55.002577    6135 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:55.021848    6135 out.go:177] * Deleting "flannel-770000" in qemu2 ...
	W0927 10:47:55.051673    6135 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:47:55.051699    6135 start.go:729] Will try again in 5 seconds ...
	I0927 10:48:00.053710    6135 start.go:360] acquireMachinesLock for flannel-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:00.054254    6135 start.go:364] duration metric: took 416.375µs to acquireMachinesLock for "flannel-770000"
	I0927 10:48:00.054380    6135 start.go:93] Provisioning new machine with config: &{Name:flannel-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:00.054574    6135 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:00.065179    6135 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:48:00.095225    6135 start.go:159] libmachine.API.Create for "flannel-770000" (driver="qemu2")
	I0927 10:48:00.095279    6135 client.go:168] LocalClient.Create starting
	I0927 10:48:00.095399    6135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:00.095461    6135 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:00.095478    6135 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:00.095534    6135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:00.095578    6135 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:00.095590    6135 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:00.096033    6135 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:00.265896    6135 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:00.370336    6135 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:00.370343    6135 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:00.370541    6135 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2
	I0927 10:48:00.380108    6135 main.go:141] libmachine: STDOUT: 
	I0927 10:48:00.380132    6135 main.go:141] libmachine: STDERR: 
	I0927 10:48:00.380206    6135 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2 +20000M
	I0927 10:48:00.388202    6135 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:00.388220    6135 main.go:141] libmachine: STDERR: 
	I0927 10:48:00.388244    6135 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2
	I0927 10:48:00.388252    6135 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:00.388261    6135 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:00.388291    6135 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:cb:ef:0c:7a:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/flannel-770000/disk.qcow2
	I0927 10:48:00.389955    6135 main.go:141] libmachine: STDOUT: 
	I0927 10:48:00.389969    6135 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:00.389983    6135 client.go:171] duration metric: took 294.713833ms to LocalClient.Create
	I0927 10:48:02.390022    6135 start.go:128] duration metric: took 2.335527334s to createHost
	I0927 10:48:02.390029    6135 start.go:83] releasing machines lock for "flannel-770000", held for 2.335868167s
	W0927 10:48:02.390104    6135 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:02.402595    6135 out.go:201] 
	W0927 10:48:02.409664    6135 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:48:02.409675    6135 out.go:270] * 
	* 
	W0927 10:48:02.410198    6135 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:48:02.420565    6135 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.80s)
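Note the retry behavior visible in the transcript: after the first createHost attempt fails, start.go logs "Will try again in 5 seconds ...", repeats the entire create sequence once, and only then exits with GUEST_PROVISION. A short Go sketch of that single-retry pattern (an illustration of the flow shown in the log, not minikube's actual implementation):

	// retry.go: one retry after a fixed 5s backoff, mirroring the
	// StartHost flow in the transcript above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the qemu2 driver's host-creation step,
	// which here always fails because socket_vmnet is unreachable.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := createHost()
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			err = createHost()
		}
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}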

TestNetworkPlugins/group/bridge/Start (9.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.961963416s)

-- stdout --
	* [bridge-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-770000" primary control-plane node in "bridge-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:48:04.807896    6252 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:48:04.808035    6252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:04.808038    6252 out.go:358] Setting ErrFile to fd 2...
	I0927 10:48:04.808040    6252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:04.808158    6252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:48:04.809228    6252 out.go:352] Setting JSON to false
	I0927 10:48:04.825443    6252 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4648,"bootTime":1727454636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:48:04.825514    6252 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:48:04.833070    6252 out.go:177] * [bridge-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:48:04.844024    6252 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:48:04.844072    6252 notify.go:220] Checking for updates...
	I0927 10:48:04.850949    6252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:48:04.853938    6252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:48:04.855113    6252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:48:04.862002    6252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:48:04.864915    6252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:48:04.868384    6252 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:48:04.868445    6252 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:48:04.868503    6252 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:48:04.873050    6252 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:48:04.879995    6252 start.go:297] selected driver: qemu2
	I0927 10:48:04.880002    6252 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:48:04.880009    6252 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:48:04.882338    6252 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:48:04.884956    6252 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:48:04.887989    6252 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:48:04.888005    6252 cni.go:84] Creating CNI manager for "bridge"
	I0927 10:48:04.888009    6252 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:48:04.888032    6252 start.go:340] cluster config:
	{Name:bridge-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:48:04.891650    6252 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:04.898945    6252 out.go:177] * Starting "bridge-770000" primary control-plane node in "bridge-770000" cluster
	I0927 10:48:04.904872    6252 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:48:04.904885    6252 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:48:04.904891    6252 cache.go:56] Caching tarball of preloaded images
	I0927 10:48:04.904944    6252 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:48:04.904950    6252 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:48:04.905003    6252 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/bridge-770000/config.json ...
	I0927 10:48:04.905014    6252 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/bridge-770000/config.json: {Name:mk670edcb1b7aebddeda3ed5e7f2ec7940499c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:48:04.905280    6252 start.go:360] acquireMachinesLock for bridge-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:04.905311    6252 start.go:364] duration metric: took 25.833µs to acquireMachinesLock for "bridge-770000"
	I0927 10:48:04.905326    6252 start.go:93] Provisioning new machine with config: &{Name:bridge-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:04.905352    6252 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:04.909016    6252 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:48:04.925911    6252 start.go:159] libmachine.API.Create for "bridge-770000" (driver="qemu2")
	I0927 10:48:04.925939    6252 client.go:168] LocalClient.Create starting
	I0927 10:48:04.925997    6252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:04.926026    6252 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:04.926042    6252 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:04.926076    6252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:04.926102    6252 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:04.926113    6252 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:04.926452    6252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:05.085296    6252 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:05.309976    6252 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:05.309986    6252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:05.310587    6252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2
	I0927 10:48:05.320285    6252 main.go:141] libmachine: STDOUT: 
	I0927 10:48:05.320304    6252 main.go:141] libmachine: STDERR: 
	I0927 10:48:05.320376    6252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2 +20000M
	I0927 10:48:05.328397    6252 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:05.328417    6252 main.go:141] libmachine: STDERR: 
	I0927 10:48:05.328432    6252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2
	I0927 10:48:05.328437    6252 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:05.328449    6252 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:05.328487    6252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:a0:e1:05:19:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2
	I0927 10:48:05.330140    6252 main.go:141] libmachine: STDOUT: 
	I0927 10:48:05.330154    6252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:05.330175    6252 client.go:171] duration metric: took 404.247834ms to LocalClient.Create
	I0927 10:48:07.332312    6252 start.go:128] duration metric: took 2.427028708s to createHost
	I0927 10:48:07.332431    6252 start.go:83] releasing machines lock for "bridge-770000", held for 2.4272145s
	W0927 10:48:07.332574    6252 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:07.343931    6252 out.go:177] * Deleting "bridge-770000" in qemu2 ...
	W0927 10:48:07.377120    6252 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:07.377143    6252 start.go:729] Will try again in 5 seconds ...
	I0927 10:48:12.379244    6252 start.go:360] acquireMachinesLock for bridge-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:12.379880    6252 start.go:364] duration metric: took 473.5µs to acquireMachinesLock for "bridge-770000"
	I0927 10:48:12.379971    6252 start.go:93] Provisioning new machine with config: &{Name:bridge-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:12.380195    6252 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:12.390819    6252 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:48:12.438131    6252 start.go:159] libmachine.API.Create for "bridge-770000" (driver="qemu2")
	I0927 10:48:12.438182    6252 client.go:168] LocalClient.Create starting
	I0927 10:48:12.438316    6252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:12.438402    6252 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:12.438426    6252 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:12.438493    6252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:12.438544    6252 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:12.438558    6252 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:12.439216    6252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:12.605605    6252 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:12.665855    6252 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:12.665867    6252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:12.666118    6252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2
	I0927 10:48:12.675796    6252 main.go:141] libmachine: STDOUT: 
	I0927 10:48:12.675818    6252 main.go:141] libmachine: STDERR: 
	I0927 10:48:12.675892    6252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2 +20000M
	I0927 10:48:12.684872    6252 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:12.684902    6252 main.go:141] libmachine: STDERR: 
	I0927 10:48:12.684914    6252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2
	I0927 10:48:12.684920    6252 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:12.684929    6252 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:12.684956    6252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:85:5b:56:a2:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/bridge-770000/disk.qcow2
	I0927 10:48:12.687119    6252 main.go:141] libmachine: STDOUT: 
	I0927 10:48:12.687138    6252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:12.687151    6252 client.go:171] duration metric: took 248.970459ms to LocalClient.Create
	I0927 10:48:14.689201    6252 start.go:128] duration metric: took 2.309063s to createHost
	I0927 10:48:14.689253    6252 start.go:83] releasing machines lock for "bridge-770000", held for 2.309424292s
	W0927 10:48:14.689467    6252 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:14.708819    6252 out.go:201] 
	W0927 10:48:14.713694    6252 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:48:14.713728    6252 out.go:270] * 
	* 
	W0927 10:48:14.714741    6252 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:48:14.730661    6252 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.96s)

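Every failure in this group has the same root cause, visible in the stderr dump above: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and minikube exits with GUEST_PROVISION. Before re-running the suite, the daemon can be probed directly on the build agent: check that the socket exists (`ls -l /var/run/socket_vmnet`), reproduce the refusal outside minikube by handing the client a trivial command to exec (`/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true`), and, assuming a Homebrew-managed install (the service name here is an assumption, not something taken from this log), restart the daemon with `sudo brew services restart socket_vmnet`.
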
TestNetworkPlugins/group/kubenet/Start (9.97s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-770000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.964640875s)

                                                
                                                
-- stdout --
	* [kubenet-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-770000" primary control-plane node in "kubenet-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 10:48:16.898239    6367 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:48:16.898352    6367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:16.898356    6367 out.go:358] Setting ErrFile to fd 2...
	I0927 10:48:16.898359    6367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:16.898492    6367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:48:16.899571    6367 out.go:352] Setting JSON to false
	I0927 10:48:16.915879    6367 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4660,"bootTime":1727454636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:48:16.915947    6367 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:48:16.923230    6367 out.go:177] * [kubenet-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:48:16.931000    6367 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:48:16.931050    6367 notify.go:220] Checking for updates...
	I0927 10:48:16.935920    6367 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:48:16.938978    6367 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:48:16.942004    6367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:48:16.945027    6367 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:48:16.947959    6367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:48:16.951325    6367 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:48:16.951387    6367 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:48:16.951435    6367 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:48:16.955886    6367 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:48:16.962976    6367 start.go:297] selected driver: qemu2
	I0927 10:48:16.962983    6367 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:48:16.962988    6367 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:48:16.965212    6367 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:48:16.969012    6367 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:48:16.972012    6367 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:48:16.972026    6367 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0927 10:48:16.972054    6367 start.go:340] cluster config:
	{Name:kubenet-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:48:16.975449    6367 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:16.980974    6367 out.go:177] * Starting "kubenet-770000" primary control-plane node in "kubenet-770000" cluster
	I0927 10:48:16.984972    6367 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:48:16.984984    6367 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:48:16.984989    6367 cache.go:56] Caching tarball of preloaded images
	I0927 10:48:16.985041    6367 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:48:16.985046    6367 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:48:16.985110    6367 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/kubenet-770000/config.json ...
	I0927 10:48:16.985120    6367 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/kubenet-770000/config.json: {Name:mkbc2d7df11e34399592b2e6aba1c06d08de1fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:48:16.985324    6367 start.go:360] acquireMachinesLock for kubenet-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:16.985353    6367 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "kubenet-770000"
	I0927 10:48:16.985364    6367 start.go:93] Provisioning new machine with config: &{Name:kubenet-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:16.985390    6367 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:16.993953    6367 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:48:17.009103    6367 start.go:159] libmachine.API.Create for "kubenet-770000" (driver="qemu2")
	I0927 10:48:17.009128    6367 client.go:168] LocalClient.Create starting
	I0927 10:48:17.009190    6367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:17.009221    6367 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:17.009231    6367 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:17.009276    6367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:17.009298    6367 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:17.009304    6367 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:17.009622    6367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:17.206729    6367 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:17.308769    6367 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:17.308774    6367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:17.308967    6367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2
	I0927 10:48:17.318351    6367 main.go:141] libmachine: STDOUT: 
	I0927 10:48:17.318366    6367 main.go:141] libmachine: STDERR: 
	I0927 10:48:17.318427    6367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2 +20000M
	I0927 10:48:17.326288    6367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:17.326304    6367 main.go:141] libmachine: STDERR: 
	I0927 10:48:17.326318    6367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2
	I0927 10:48:17.326321    6367 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:17.326335    6367 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:17.326365    6367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:21:3d:fd:c4:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2
	I0927 10:48:17.328032    6367 main.go:141] libmachine: STDOUT: 
	I0927 10:48:17.328047    6367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:17.328068    6367 client.go:171] duration metric: took 318.946917ms to LocalClient.Create
	I0927 10:48:19.330138    6367 start.go:128] duration metric: took 2.344811417s to createHost
	I0927 10:48:19.330177    6367 start.go:83] releasing machines lock for "kubenet-770000", held for 2.344899s
	W0927 10:48:19.330240    6367 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:19.347795    6367 out.go:177] * Deleting "kubenet-770000" in qemu2 ...
	W0927 10:48:19.375616    6367 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:19.375634    6367 start.go:729] Will try again in 5 seconds ...
	I0927 10:48:24.377817    6367 start.go:360] acquireMachinesLock for kubenet-770000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:24.378455    6367 start.go:364] duration metric: took 477.917µs to acquireMachinesLock for "kubenet-770000"
	I0927 10:48:24.378642    6367 start.go:93] Provisioning new machine with config: &{Name:kubenet-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:24.378919    6367 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:24.390459    6367 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 10:48:24.436131    6367 start.go:159] libmachine.API.Create for "kubenet-770000" (driver="qemu2")
	I0927 10:48:24.436198    6367 client.go:168] LocalClient.Create starting
	I0927 10:48:24.436349    6367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:24.436421    6367 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:24.436441    6367 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:24.436495    6367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:24.436541    6367 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:24.436553    6367 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:24.437234    6367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:24.604034    6367 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:24.770668    6367 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:24.770678    6367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:24.770887    6367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2
	I0927 10:48:24.780460    6367 main.go:141] libmachine: STDOUT: 
	I0927 10:48:24.780476    6367 main.go:141] libmachine: STDERR: 
	I0927 10:48:24.780549    6367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2 +20000M
	I0927 10:48:24.788577    6367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:24.788594    6367 main.go:141] libmachine: STDERR: 
	I0927 10:48:24.788608    6367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2
	I0927 10:48:24.788627    6367 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:24.788635    6367 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:24.788675    6367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:9f:df:fd:a6:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/kubenet-770000/disk.qcow2
	I0927 10:48:24.790423    6367 main.go:141] libmachine: STDOUT: 
	I0927 10:48:24.790436    6367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:24.790452    6367 client.go:171] duration metric: took 354.260875ms to LocalClient.Create
	I0927 10:48:26.792466    6367 start.go:128] duration metric: took 2.4136115s to createHost
	I0927 10:48:26.792494    6367 start.go:83] releasing machines lock for "kubenet-770000", held for 2.414074916s
	W0927 10:48:26.792593    6367 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:26.800915    6367 out.go:201] 
	W0927 10:48:26.805115    6367 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:48:26.805122    6367 out.go:270] * 
	* 
	W0927 10:48:26.805658    6367 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:48:26.817029    6367 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.97s)

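The kubenet failure is the same socket_vmnet refusal; exit status 80 here corresponds to the GUEST_PROVISION guest-error exit shown in the log, not to anything kubenet-specific. For a CI pre-flight check, a minimal Go probe can distinguish a stale socket file from a daemon that was never started. This is a hypothetical standalone helper for the build agent, not part of the minikube tree:

	// probe_socket_vmnet.go: fail fast when the socket_vmnet daemon is down.
	// Hypothetical diagnostic helper, not part of minikube.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused": the socket file exists but no daemon answers;
			// "no such file or directory": socket_vmnet was never started.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running the probe before the network-plugin group would turn ten slow per-test failures into one immediate, clearly attributed infrastructure failure.
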
TestStartStop/group/old-k8s-version/serial/FirstStart (9.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.738258083s)

                                                
                                                
-- stdout --
	* [old-k8s-version-011000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-011000" primary control-plane node in "old-k8s-version-011000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-011000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 10:48:29.054954    6483 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:48:29.055077    6483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:29.055080    6483 out.go:358] Setting ErrFile to fd 2...
	I0927 10:48:29.055082    6483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:29.055209    6483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:48:29.056302    6483 out.go:352] Setting JSON to false
	I0927 10:48:29.073680    6483 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4673,"bootTime":1727454636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:48:29.073787    6483 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:48:29.081063    6483 out.go:177] * [old-k8s-version-011000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:48:29.088879    6483 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:48:29.088898    6483 notify.go:220] Checking for updates...
	I0927 10:48:29.096791    6483 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:48:29.103788    6483 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:48:29.111844    6483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:48:29.114795    6483 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:48:29.117824    6483 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:48:29.122138    6483 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:48:29.122205    6483 config.go:182] Loaded profile config "stopped-upgrade-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0927 10:48:29.122250    6483 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:48:29.125832    6483 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:48:29.132821    6483 start.go:297] selected driver: qemu2
	I0927 10:48:29.132826    6483 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:48:29.132831    6483 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:48:29.135190    6483 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:48:29.138933    6483 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:48:29.141973    6483 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:48:29.142004    6483 cni.go:84] Creating CNI manager for ""
	I0927 10:48:29.142026    6483 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0927 10:48:29.142047    6483 start.go:340] cluster config:
	{Name:old-k8s-version-011000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:48:29.145884    6483 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:29.149852    6483 out.go:177] * Starting "old-k8s-version-011000" primary control-plane node in "old-k8s-version-011000" cluster
	I0927 10:48:29.153873    6483 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 10:48:29.153889    6483 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0927 10:48:29.153901    6483 cache.go:56] Caching tarball of preloaded images
	I0927 10:48:29.153974    6483 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:48:29.153980    6483 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0927 10:48:29.154053    6483 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/old-k8s-version-011000/config.json ...
	I0927 10:48:29.154065    6483 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/old-k8s-version-011000/config.json: {Name:mke5c0deae6b95e23e5249a7ffe7ad96900062a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:48:29.154266    6483 start.go:360] acquireMachinesLock for old-k8s-version-011000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:29.154299    6483 start.go:364] duration metric: took 26.542µs to acquireMachinesLock for "old-k8s-version-011000"
	I0927 10:48:29.154314    6483 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:29.154339    6483 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:29.161848    6483 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:48:29.178024    6483 start.go:159] libmachine.API.Create for "old-k8s-version-011000" (driver="qemu2")
	I0927 10:48:29.178058    6483 client.go:168] LocalClient.Create starting
	I0927 10:48:29.178119    6483 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:29.178153    6483 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:29.178164    6483 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:29.178200    6483 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:29.178222    6483 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:29.178229    6483 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:29.178564    6483 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:29.335577    6483 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:29.372688    6483 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:29.372693    6483 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:29.372935    6483 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0927 10:48:29.382452    6483 main.go:141] libmachine: STDOUT: 
	I0927 10:48:29.382470    6483 main.go:141] libmachine: STDERR: 
	I0927 10:48:29.382547    6483 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2 +20000M
	I0927 10:48:29.390612    6483 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:29.390636    6483 main.go:141] libmachine: STDERR: 
	I0927 10:48:29.390650    6483 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0927 10:48:29.390656    6483 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:29.390665    6483 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:29.390691    6483 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:54:c2:a9:88:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0927 10:48:29.392467    6483 main.go:141] libmachine: STDOUT: 
	I0927 10:48:29.392486    6483 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:29.392506    6483 client.go:171] duration metric: took 214.447625ms to LocalClient.Create
	I0927 10:48:31.394697    6483 start.go:128] duration metric: took 2.240394208s to createHost
	I0927 10:48:31.394791    6483 start.go:83] releasing machines lock for "old-k8s-version-011000", held for 2.24055175s
	W0927 10:48:31.394853    6483 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:31.414308    6483 out.go:177] * Deleting "old-k8s-version-011000" in qemu2 ...
	W0927 10:48:31.443088    6483 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:31.443114    6483 start.go:729] Will try again in 5 seconds ...
	I0927 10:48:36.445221    6483 start.go:360] acquireMachinesLock for old-k8s-version-011000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:36.445718    6483 start.go:364] duration metric: took 403.417µs to acquireMachinesLock for "old-k8s-version-011000"
	I0927 10:48:36.445852    6483 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:36.446072    6483 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:36.456562    6483 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:48:36.501446    6483 start.go:159] libmachine.API.Create for "old-k8s-version-011000" (driver="qemu2")
	I0927 10:48:36.501494    6483 client.go:168] LocalClient.Create starting
	I0927 10:48:36.501634    6483 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:36.501707    6483 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:36.501726    6483 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:36.501784    6483 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:36.501846    6483 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:36.501863    6483 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:36.502399    6483 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:36.669070    6483 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:36.702354    6483 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:36.702360    6483 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:36.702567    6483 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0927 10:48:36.711882    6483 main.go:141] libmachine: STDOUT: 
	I0927 10:48:36.711906    6483 main.go:141] libmachine: STDERR: 
	I0927 10:48:36.711971    6483 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2 +20000M
	I0927 10:48:36.720028    6483 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:36.720043    6483 main.go:141] libmachine: STDERR: 
	I0927 10:48:36.720052    6483 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0927 10:48:36.720056    6483 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:36.720065    6483 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:36.720089    6483 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:a8:b1:31:45:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0927 10:48:36.721815    6483 main.go:141] libmachine: STDOUT: 
	I0927 10:48:36.721833    6483 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:36.721845    6483 client.go:171] duration metric: took 220.351459ms to LocalClient.Create
	I0927 10:48:38.722229    6483 start.go:128] duration metric: took 2.276210375s to createHost
	I0927 10:48:38.722281    6483 start.go:83] releasing machines lock for "old-k8s-version-011000", held for 2.276606125s
	W0927 10:48:38.722406    6483 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-011000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-011000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:38.730338    6483 out.go:201] 
	W0927 10:48:38.740418    6483 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:48:38.740427    6483 out.go:270] * 
	* 
	W0927 10:48:38.741174    6483 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:48:38.755278    6483 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
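Note that the disk-provisioning half of the trace above actually succeeds: libmachine converts the raw boot image to qcow2 and grows it by the requested 20000 MB before the network attach fails. Those two steps can be replayed by hand with the same qemu-img subcommands the log shows (a sketch; run it in the machine directory, the file names being the ones libmachine created):

	# convert the raw image written by libmachine into qcow2 format
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	# grow the image by 20000 MB, matching the requested disk size
	qemu-img resize disk.qcow2 +20000M
	# confirm the new virtual size
	qemu-img info disk.qcow2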
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (40.517792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.78s)
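Both create attempts in this test stop at the same host-side precondition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits before QEMU is ever launched. A pre-flight check on the host would look like this (a sketch assuming the Homebrew-style install paths that appear in the log; the gateway address is an illustrative value per the socket_vmnet README):

	# is the UNIX socket present, and does a client connect?
	ls -l /var/run/socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true || echo "daemon unreachable"
	# if unreachable, start the daemon; root is required for vmnet.framework
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet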

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-011000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-011000 create -f testdata/busybox.yaml: exit status 1 (26.111875ms)

** stderr ** 
	error: context "old-k8s-version-011000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-011000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (29.995875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (30.52475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
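This failure is purely downstream of FirstStart: since no cluster was ever created, no kubeconfig context named old-k8s-version-011000 exists, and kubectl aborts before contacting any server. What kubectl actually knows about can be confirmed with its standard config subcommands:

	# list every context in the active kubeconfig
	kubectl config get-contexts
	# show which context kubectl would use by default
	kubectl config current-context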

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-011000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-011000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-011000 describe deploy/metrics-server -n kube-system: exit status 1 (27.547ms)

** stderr ** 
	error: context "old-k8s-version-011000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-011000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (28.98425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
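The `addons enable` invocation itself is not reported as failing here; it is the follow-up assertion at start_stop_delete_test.go:221 that finds no deployment to inspect, again because there is no cluster behind the context. On a healthy cluster the check reduces to reading the image reference out of the metrics-server deployment, roughly (a sketch of the equivalent kubectl query; the jsonpath expression is illustrative):

	kubectl --context old-k8s-version-011000 -n kube-system \
		get deploy metrics-server \
		-o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4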

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.197583375s)

-- stdout --
	* [old-k8s-version-011000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-011000" primary control-plane node in "old-k8s-version-011000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:48:42.046656    6533 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:48:42.046792    6533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:42.046796    6533 out.go:358] Setting ErrFile to fd 2...
	I0927 10:48:42.046798    6533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:42.046937    6533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:48:42.048007    6533 out.go:352] Setting JSON to false
	I0927 10:48:42.064287    6533 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4686,"bootTime":1727454636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:48:42.064360    6533 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:48:42.067915    6533 out.go:177] * [old-k8s-version-011000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:48:42.074967    6533 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:48:42.075056    6533 notify.go:220] Checking for updates...
	I0927 10:48:42.080880    6533 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:48:42.083913    6533 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:48:42.086809    6533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:48:42.089848    6533 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:48:42.092862    6533 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:48:42.096270    6533 config.go:182] Loaded profile config "old-k8s-version-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0927 10:48:42.099877    6533 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0927 10:48:42.102820    6533 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:48:42.106841    6533 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:48:42.113921    6533 start.go:297] selected driver: qemu2
	I0927 10:48:42.113932    6533 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:48:42.113985    6533 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:48:42.116117    6533 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:48:42.116142    6533 cni.go:84] Creating CNI manager for ""
	I0927 10:48:42.116165    6533 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0927 10:48:42.116194    6533 start.go:340] cluster config:
	{Name:old-k8s-version-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:48:42.119535    6533 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:42.126874    6533 out.go:177] * Starting "old-k8s-version-011000" primary control-plane node in "old-k8s-version-011000" cluster
	I0927 10:48:42.129840    6533 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 10:48:42.129856    6533 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0927 10:48:42.129866    6533 cache.go:56] Caching tarball of preloaded images
	I0927 10:48:42.129916    6533 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:48:42.129921    6533 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0927 10:48:42.129967    6533 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/old-k8s-version-011000/config.json ...
	I0927 10:48:42.130510    6533 start.go:360] acquireMachinesLock for old-k8s-version-011000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:42.130541    6533 start.go:364] duration metric: took 25.292µs to acquireMachinesLock for "old-k8s-version-011000"
	I0927 10:48:42.130549    6533 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:48:42.130554    6533 fix.go:54] fixHost starting: 
	I0927 10:48:42.130663    6533 fix.go:112] recreateIfNeeded on old-k8s-version-011000: state=Stopped err=<nil>
	W0927 10:48:42.130672    6533 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:48:42.134905    6533 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-011000" ...
	I0927 10:48:42.141856    6533 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:42.141898    6533 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:a8:b1:31:45:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0927 10:48:42.143760    6533 main.go:141] libmachine: STDOUT: 
	I0927 10:48:42.143789    6533 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:42.143820    6533 fix.go:56] duration metric: took 13.266542ms for fixHost
	I0927 10:48:42.143825    6533 start.go:83] releasing machines lock for "old-k8s-version-011000", held for 13.280542ms
	W0927 10:48:42.143831    6533 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:48:42.143864    6533 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:42.143868    6533 start.go:729] Will try again in 5 seconds ...
	I0927 10:48:47.144159    6533 start.go:360] acquireMachinesLock for old-k8s-version-011000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:47.144632    6533 start.go:364] duration metric: took 382.958µs to acquireMachinesLock for "old-k8s-version-011000"
	I0927 10:48:47.144776    6533 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:48:47.144796    6533 fix.go:54] fixHost starting: 
	I0927 10:48:47.145617    6533 fix.go:112] recreateIfNeeded on old-k8s-version-011000: state=Stopped err=<nil>
	W0927 10:48:47.145641    6533 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:48:47.150231    6533 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-011000" ...
	I0927 10:48:47.169878    6533 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:47.170100    6533 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:a8:b1:31:45:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0927 10:48:47.178668    6533 main.go:141] libmachine: STDOUT: 
	I0927 10:48:47.178726    6533 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:47.178824    6533 fix.go:56] duration metric: took 34.030458ms for fixHost
	I0927 10:48:47.178846    6533 start.go:83] releasing machines lock for "old-k8s-version-011000", held for 34.193291ms
	W0927 10:48:47.179057    6533 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:47.187890    6533 out.go:201] 
	W0927 10:48:47.192215    6533 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:48:47.192238    6533 out.go:270] * 
	* 
	W0927 10:48:47.194094    6533 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:48:47.205075    6533 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (65.950666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
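Unlike FirstStart, this run takes the fixHost path: the profile already exists, so minikube skips disk creation and goes straight to relaunching the stopped VM with its saved MAC address (12:a8:b1:31:45:0a), then hits the identical socket connect failure; the retry after 5 seconds is the generic start loop rather than anything restart-specific. The minimal reproduction is the wrapped QEMU launch itself (a sketch trimmed from the `executing:` line above; with the socket_vmnet daemon down it fails at the connect, before QEMU starts):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
		qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -display none \
		-m 2200 -smp 2 \
		-device virtio-net-pci,netdev=net0,mac=12:a8:b1:31:45:0a \
		-netdev socket,id=net0,fd=3 \
		-daemonize disk.qcow2
	# prints: Failed to connect to "/var/run/socket_vmnet": Connection refused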

TestStartStop/group/no-preload/serial/FirstStart (10.2s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-684000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-684000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.137336541s)

-- stdout --
	* [no-preload-684000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-684000" primary control-plane node in "no-preload-684000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-684000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:48:43.316050    6543 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:48:43.316180    6543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:43.316183    6543 out.go:358] Setting ErrFile to fd 2...
	I0927 10:48:43.316186    6543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:43.316315    6543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:48:43.317356    6543 out.go:352] Setting JSON to false
	I0927 10:48:43.333522    6543 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4687,"bootTime":1727454636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:48:43.333594    6543 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:48:43.338040    6543 out.go:177] * [no-preload-684000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:48:43.347966    6543 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:48:43.348004    6543 notify.go:220] Checking for updates...
	I0927 10:48:43.355008    6543 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:48:43.357898    6543 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:48:43.360977    6543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:48:43.364005    6543 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:48:43.366989    6543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:48:43.370302    6543 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:48:43.370377    6543 config.go:182] Loaded profile config "old-k8s-version-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0927 10:48:43.370420    6543 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:48:43.374973    6543 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:48:43.381964    6543 start.go:297] selected driver: qemu2
	I0927 10:48:43.381971    6543 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:48:43.381977    6543 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:48:43.384323    6543 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:48:43.387015    6543 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:48:43.390035    6543 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:48:43.390056    6543 cni.go:84] Creating CNI manager for ""
	I0927 10:48:43.390087    6543 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:48:43.390098    6543 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:48:43.390127    6543 start.go:340] cluster config:
	{Name:no-preload-684000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:48:43.393902    6543 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:43.400974    6543 out.go:177] * Starting "no-preload-684000" primary control-plane node in "no-preload-684000" cluster
	I0927 10:48:43.404890    6543 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:48:43.404984    6543 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/no-preload-684000/config.json ...
	I0927 10:48:43.405014    6543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/no-preload-684000/config.json: {Name:mk9b0824353edf39de91a68395c9f4966a76f08a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:48:43.405027    6543 cache.go:107] acquiring lock: {Name:mk865a669a22d5796e2286bf0736b1694aa96165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:43.405025    6543 cache.go:107] acquiring lock: {Name:mk24e47a2323d63a00172a2ddaddb4a1994e7650 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:43.405028    6543 cache.go:107] acquiring lock: {Name:mkf48093fa971191f71c46f781d51b0e356458e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:43.405056    6543 cache.go:107] acquiring lock: {Name:mka49bbed88dac9760198ae1cf1bf744cd6a8f6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:43.405260    6543 cache.go:107] acquiring lock: {Name:mk04dfbba0912ab743b61e0c2651ed303f57fbf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:43.405268    6543 cache.go:115] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0927 10:48:43.405280    6543 cache.go:107] acquiring lock: {Name:mkda553429935654ab1b40dd60951d922c0a511b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:43.405282    6543 cache.go:107] acquiring lock: {Name:mkd789685227a83ec0fe6bd238a1915d538e340b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:43.405314    6543 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 10:48:43.405307    6543 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 253.042µs
	I0927 10:48:43.405318    6543 cache.go:107] acquiring lock: {Name:mk8686aeb51b4f49a20a79af2042c07c2f411e73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:43.405365    6543 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 10:48:43.405366    6543 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0927 10:48:43.405269    6543 start.go:360] acquireMachinesLock for no-preload-684000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:43.405377    6543 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0927 10:48:43.405378    6543 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 10:48:43.405314    6543 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0927 10:48:43.405506    6543 start.go:364] duration metric: took 127µs to acquireMachinesLock for "no-preload-684000"
	I0927 10:48:43.405549    6543 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 10:48:43.405526    6543 start.go:93] Provisioning new machine with config: &{Name:no-preload-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:43.405581    6543 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:43.405618    6543 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 10:48:43.412937    6543 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:48:43.416686    6543 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0927 10:48:43.418481    6543 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0927 10:48:43.418632    6543 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0927 10:48:43.418666    6543 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0927 10:48:43.419285    6543 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0927 10:48:43.419340    6543 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0927 10:48:43.419378    6543 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0927 10:48:43.431512    6543 start.go:159] libmachine.API.Create for "no-preload-684000" (driver="qemu2")
	I0927 10:48:43.431537    6543 client.go:168] LocalClient.Create starting
	I0927 10:48:43.431608    6543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:43.431654    6543 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:43.431664    6543 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:43.431710    6543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:43.431735    6543 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:43.431746    6543 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:43.432137    6543 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:43.595675    6543 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:43.732056    6543 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:43.732078    6543 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:43.732264    6543 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2
	I0927 10:48:43.741309    6543 main.go:141] libmachine: STDOUT: 
	I0927 10:48:43.741325    6543 main.go:141] libmachine: STDERR: 
	I0927 10:48:43.741375    6543 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2 +20000M
	I0927 10:48:43.749491    6543 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:43.749505    6543 main.go:141] libmachine: STDERR: 
	I0927 10:48:43.749516    6543 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2
	I0927 10:48:43.749521    6543 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:43.749533    6543 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:43.749558    6543 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:7d:0a:95:65:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2
	I0927 10:48:43.751780    6543 main.go:141] libmachine: STDOUT: 
	I0927 10:48:43.751802    6543 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:43.751827    6543 client.go:171] duration metric: took 320.291458ms to LocalClient.Create
	I0927 10:48:43.837285    6543 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0927 10:48:43.849885    6543 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0927 10:48:43.868628    6543 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0927 10:48:43.871957    6543 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0927 10:48:43.878093    6543 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0927 10:48:43.883111    6543 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0927 10:48:43.914493    6543 cache.go:162] opening:  /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0927 10:48:44.067696    6543 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0927 10:48:44.067758    6543 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 662.715375ms
	I0927 10:48:44.067810    6543 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0927 10:48:45.752035    6543 start.go:128] duration metric: took 2.346464875s to createHost
	I0927 10:48:45.752123    6543 start.go:83] releasing machines lock for "no-preload-684000", held for 2.346672625s
	W0927 10:48:45.752181    6543 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:45.771942    6543 out.go:177] * Deleting "no-preload-684000" in qemu2 ...
	W0927 10:48:45.808694    6543 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:45.808722    6543 start.go:729] Will try again in 5 seconds ...
	I0927 10:48:46.907514    6543 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0927 10:48:46.907571    6543 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.5024415s
	I0927 10:48:46.907610    6543 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0927 10:48:47.213694    6543 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0927 10:48:47.213743    6543 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 3.80862775s
	I0927 10:48:47.213771    6543 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0927 10:48:47.221799    6543 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0927 10:48:47.221833    6543 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.81673025s
	I0927 10:48:47.221854    6543 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0927 10:48:48.437255    6543 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0927 10:48:48.437279    6543 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 5.032404584s
	I0927 10:48:48.437295    6543 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0927 10:48:48.776152    6543 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0927 10:48:48.776198    6543 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 5.371328334s
	I0927 10:48:48.776221    6543 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0927 10:48:50.808933    6543 start.go:360] acquireMachinesLock for no-preload-684000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:50.809318    6543 start.go:364] duration metric: took 310.625µs to acquireMachinesLock for "no-preload-684000"
	I0927 10:48:50.809437    6543 start.go:93] Provisioning new machine with config: &{Name:no-preload-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:no-preload-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:50.809751    6543 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:50.815330    6543 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:48:50.867875    6543 start.go:159] libmachine.API.Create for "no-preload-684000" (driver="qemu2")
	I0927 10:48:50.867951    6543 client.go:168] LocalClient.Create starting
	I0927 10:48:50.868073    6543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:50.868143    6543 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:50.868161    6543 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:50.868234    6543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:50.868277    6543 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:50.868298    6543 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:50.868767    6543 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:51.064598    6543 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:51.357203    6543 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:51.357217    6543 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:51.357473    6543 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2
	I0927 10:48:51.367090    6543 main.go:141] libmachine: STDOUT: 
	I0927 10:48:51.367110    6543 main.go:141] libmachine: STDERR: 
	I0927 10:48:51.367178    6543 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2 +20000M
	I0927 10:48:51.375162    6543 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:51.375175    6543 main.go:141] libmachine: STDERR: 
	I0927 10:48:51.375186    6543 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2
	I0927 10:48:51.375191    6543 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:51.375203    6543 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:51.375238    6543 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:71:58:64:16:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2
	I0927 10:48:51.376931    6543 main.go:141] libmachine: STDOUT: 
	I0927 10:48:51.376944    6543 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:51.376955    6543 client.go:171] duration metric: took 509.007292ms to LocalClient.Create
	I0927 10:48:52.389400    6543 cache.go:157] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0927 10:48:52.389455    6543 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.984428833s
	I0927 10:48:52.389503    6543 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0927 10:48:52.389548    6543 cache.go:87] Successfully saved all images to host disk.
	I0927 10:48:53.379253    6543 start.go:128] duration metric: took 2.56950675s to createHost
	I0927 10:48:53.379346    6543 start.go:83] releasing machines lock for "no-preload-684000", held for 2.570078583s
	W0927 10:48:53.379685    6543 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-684000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-684000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:53.393882    6543 out.go:201] 
	W0927 10:48:53.397859    6543 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:48:53.397886    6543 out.go:270] * 
	* 
	W0927 10:48:53.400342    6543 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:48:53.409832    6543 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-684000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (58.700583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.20s)
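
Note: every qemu2 "create host" attempt in this run dies with the same libmachine STDERR, Failed to connect to "/var/run/socket_vmnet": Connection refused — i.e. nothing was listening on the host socket when minikube invoked /opt/socket_vmnet/bin/socket_vmnet_client. The Go sketch below is a minimal diagnostic, not part of the test suite; the socket path is taken from the log above. It reproduces the connect step in isolation, which separates "socket_vmnet daemon down on the CI host" from a genuine driver regression:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path reported by the failing runs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the libmachine STDERR in the log.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet daemon is listening on", sock)
}

Running a probe like this before the suite starts would let the harness fail fast with a clear environment error instead of 10-second timeouts per test.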

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-011000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (31.842666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
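
Note: this failure and the remaining old-k8s-version subtests are cascade failures, not independent bugs: FirstStart never created the cluster, so the kubeconfig context "old-k8s-version-011000" was never written and every kubectl call exits immediately. A stdlib-only Go sketch of the guard a triage script could apply — it assumes kubectl is on PATH and that "kubectl config get-contexts -o name" prints one context name per line:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "old-k8s-version-011000" // context name from the failing run
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl failed:", err)
		os.Exit(1)
	}
	for _, name := range strings.Fields(string(out)) {
		if name == ctx {
			fmt.Println("context exists; dependent subtests can run")
			return
		}
	}
	fmt.Printf("context %q does not exist; the failures below are fallout from FirstStart\n", ctx)
}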

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-011000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-011000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-011000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.801875ms)

** stderr ** 
	error: context "old-k8s-version-011000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-011000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (29.252958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-011000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (29.324625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
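
Note: the -want +got diff above compares the expected v1.20.0 image list (k8s.gcr.io names, since v1.20.0 predates the move to registry.k8s.io) against whatever "image list" returns; a Stopped host reports no images at all, so every expected entry shows as missing. A toy Go illustration of that set difference — not the test's actual code, just the shape of the comparison:

package main

import "fmt"

// missing returns the entries of want that are absent from got.
func missing(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, g := range got {
		have[g] = true
	}
	var out []string
	for _, w := range want {
		if !have[w] {
			out = append(out, w)
		}
	}
	return out
}

func main() {
	want := []string{"k8s.gcr.io/kube-apiserver:v1.20.0", "k8s.gcr.io/pause:3.2"}
	got := []string{} // a Stopped host reports no images
	fmt.Println(missing(want, got)) // both entries missing, as in the diff above
}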

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-011000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-011000 --alsologtostderr -v=1: exit status 83 (44.062ms)

-- stdout --
	* The control-plane node old-k8s-version-011000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-011000"

-- /stdout --
** stderr ** 
	I0927 10:48:47.470727    6596 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:48:47.471093    6596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:47.471097    6596 out.go:358] Setting ErrFile to fd 2...
	I0927 10:48:47.471099    6596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:47.471286    6596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:48:47.471480    6596 out.go:352] Setting JSON to false
	I0927 10:48:47.471489    6596 mustload.go:65] Loading cluster: old-k8s-version-011000
	I0927 10:48:47.471710    6596 config.go:182] Loaded profile config "old-k8s-version-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0927 10:48:47.476014    6596 out.go:177] * The control-plane node old-k8s-version-011000 host is not running: state=Stopped
	I0927 10:48:47.484024    6596 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-011000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-011000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (28.779667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (29.372417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
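
Note: exit status 83 here is minikube declining to pause because the control-plane host is Stopped; the post-mortem's status check then returns exit 7, which the harness tolerates ("may be ok"). A hedged Go sketch of how a wrapper could branch on that code — the binary path is copied from the log, and 83 is taken from this run's output, not asserted as a stable contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-011000").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("paused")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 83:
		// 83 is the code observed in this run when the host is Stopped.
		fmt.Println(`host not running; start it with "minikube start -p old-k8s-version-011000"`)
	case errors.As(err, &exitErr):
		fmt.Println("pause failed, exit status", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}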

TestStartStop/group/embed-certs/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-936000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-936000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.946988333s)

-- stdout --
	* [embed-certs-936000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-936000" primary control-plane node in "embed-certs-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:48:47.794169    6613 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:48:47.794295    6613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:47.794299    6613 out.go:358] Setting ErrFile to fd 2...
	I0927 10:48:47.794301    6613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:47.794416    6613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:48:47.795479    6613 out.go:352] Setting JSON to false
	I0927 10:48:47.811871    6613 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4691,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:48:47.811969    6613 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:48:47.816051    6613 out.go:177] * [embed-certs-936000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:48:47.824127    6613 notify.go:220] Checking for updates...
	I0927 10:48:47.827963    6613 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:48:47.835988    6613 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:48:47.844014    6613 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:48:47.851990    6613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:48:47.858904    6613 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:48:47.865986    6613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:48:47.870310    6613 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:48:47.870374    6613 config.go:182] Loaded profile config "no-preload-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:48:47.870417    6613 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:48:47.874034    6613 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:48:47.881962    6613 start.go:297] selected driver: qemu2
	I0927 10:48:47.881967    6613 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:48:47.881972    6613 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:48:47.884239    6613 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:48:47.888021    6613 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:48:47.892114    6613 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:48:47.892133    6613 cni.go:84] Creating CNI manager for ""
	I0927 10:48:47.892156    6613 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:48:47.892160    6613 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:48:47.892193    6613 start.go:340] cluster config:
	{Name:embed-certs-936000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:48:47.896002    6613 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:47.904021    6613 out.go:177] * Starting "embed-certs-936000" primary control-plane node in "embed-certs-936000" cluster
	I0927 10:48:47.907065    6613 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:48:47.907082    6613 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:48:47.907095    6613 cache.go:56] Caching tarball of preloaded images
	I0927 10:48:47.907163    6613 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:48:47.907169    6613 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:48:47.907229    6613 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/embed-certs-936000/config.json ...
	I0927 10:48:47.907239    6613 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/embed-certs-936000/config.json: {Name:mk22fea78ca812fb49523248043606e29fe0a533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:48:47.907463    6613 start.go:360] acquireMachinesLock for embed-certs-936000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:47.907496    6613 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "embed-certs-936000"
	I0927 10:48:47.907508    6613 start.go:93] Provisioning new machine with config: &{Name:embed-certs-936000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:embed-certs-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:47.907551    6613 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:47.916008    6613 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:48:47.933883    6613 start.go:159] libmachine.API.Create for "embed-certs-936000" (driver="qemu2")
	I0927 10:48:47.933910    6613 client.go:168] LocalClient.Create starting
	I0927 10:48:47.933990    6613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:47.934019    6613 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:47.934029    6613 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:47.934086    6613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:47.934109    6613 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:47.934115    6613 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:47.934478    6613 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:48.093666    6613 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:48.233997    6613 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:48.234004    6613 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:48.234209    6613 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2
	I0927 10:48:48.243655    6613 main.go:141] libmachine: STDOUT: 
	I0927 10:48:48.243671    6613 main.go:141] libmachine: STDERR: 
	I0927 10:48:48.243724    6613 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2 +20000M
	I0927 10:48:48.251768    6613 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:48.251783    6613 main.go:141] libmachine: STDERR: 
	I0927 10:48:48.251794    6613 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2
	I0927 10:48:48.251810    6613 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:48.251827    6613 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:48.251853    6613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:a1:f4:01:03:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2
	I0927 10:48:48.253534    6613 main.go:141] libmachine: STDOUT: 
	I0927 10:48:48.253549    6613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:48.253567    6613 client.go:171] duration metric: took 319.660541ms to LocalClient.Create
	I0927 10:48:50.255754    6613 start.go:128] duration metric: took 2.348249875s to createHost
	I0927 10:48:50.255814    6613 start.go:83] releasing machines lock for "embed-certs-936000", held for 2.348375417s
	W0927 10:48:50.255877    6613 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:50.268190    6613 out.go:177] * Deleting "embed-certs-936000" in qemu2 ...
	W0927 10:48:50.311350    6613 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:50.311371    6613 start.go:729] Will try again in 5 seconds ...
	I0927 10:48:55.313444    6613 start.go:360] acquireMachinesLock for embed-certs-936000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:55.313924    6613 start.go:364] duration metric: took 332.709µs to acquireMachinesLock for "embed-certs-936000"
	I0927 10:48:55.314105    6613 start.go:93] Provisioning new machine with config: &{Name:embed-certs-936000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:embed-certs-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:48:55.314380    6613 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:48:55.322106    6613 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:48:55.373692    6613 start.go:159] libmachine.API.Create for "embed-certs-936000" (driver="qemu2")
	I0927 10:48:55.373754    6613 client.go:168] LocalClient.Create starting
	I0927 10:48:55.373857    6613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:48:55.373898    6613 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:55.373917    6613 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:55.374002    6613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:48:55.374031    6613 main.go:141] libmachine: Decoding PEM data...
	I0927 10:48:55.374046    6613 main.go:141] libmachine: Parsing certificate...
	I0927 10:48:55.374612    6613 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:48:55.544897    6613 main.go:141] libmachine: Creating SSH key...
	I0927 10:48:55.659409    6613 main.go:141] libmachine: Creating Disk image...
	I0927 10:48:55.659417    6613 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:48:55.659615    6613 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2
	I0927 10:48:55.668536    6613 main.go:141] libmachine: STDOUT: 
	I0927 10:48:55.668555    6613 main.go:141] libmachine: STDERR: 
	I0927 10:48:55.668611    6613 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2 +20000M
	I0927 10:48:55.676608    6613 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:48:55.676628    6613 main.go:141] libmachine: STDERR: 
	I0927 10:48:55.676642    6613 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2
	I0927 10:48:55.676647    6613 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:48:55.676655    6613 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:55.676692    6613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:23:e0:5a:a6:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2
	I0927 10:48:55.678283    6613 main.go:141] libmachine: STDOUT: 
	I0927 10:48:55.678298    6613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:55.678310    6613 client.go:171] duration metric: took 304.559792ms to LocalClient.Create
	I0927 10:48:57.678343    6613 start.go:128] duration metric: took 2.363984875s to createHost
	I0927 10:48:57.678366    6613 start.go:83] releasing machines lock for "embed-certs-936000", held for 2.3644865s
	W0927 10:48:57.678438    6613 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:57.684934    6613 out.go:201] 
	W0927 10:48:57.689016    6613 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:48:57.689050    6613 out.go:270] * 
	* 
	W0927 10:48:57.689462    6613 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:48:57.700067    6613 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-936000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (29.59775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.98s)
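
Note: the stderr above shows the full start.go recovery path: createHost fails, the half-created profile is deleted, start.go:729 waits five seconds, and a single retry runs before the command exits with GUEST_PROVISION. A simplified stdlib Go sketch of that retry-once shape — createHost is a hypothetical stand-in for the real driver call, the error string is copied from the log, and the delete step is elided:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the qemu2 driver call; in this run it always
// failed because nothing was listening on /var/run/socket_vmnet.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := createHost()
	if err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		// The real code deletes the partially created VM before retrying.
		time.Sleep(5 * time.Second)
		err = createHost()
	}
	if err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		return
	}
	fmt.Println("host created")
}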

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-684000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-684000 create -f testdata/busybox.yaml: exit status 1 (29.235791ms)

** stderr ** 
	error: context "no-preload-684000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-684000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (29.486041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-684000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (29.295875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-684000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0927 10:48:53.585193    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-684000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-684000 describe deploy/metrics-server -n kube-system: exit status 1 (26.374291ms)

** stderr ** 
	error: context "no-preload-684000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-684000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (29.921875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-684000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-684000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.208127209s)

-- stdout --
	* [no-preload-684000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-684000" primary control-plane node in "no-preload-684000" cluster
	* Restarting existing qemu2 VM for "no-preload-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:48:57.771235    6669 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:48:57.771367    6669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:57.771371    6669 out.go:358] Setting ErrFile to fd 2...
	I0927 10:48:57.771374    6669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:48:57.771513    6669 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:48:57.772752    6669 out.go:352] Setting JSON to false
	I0927 10:48:57.790700    6669 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4701,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:48:57.790770    6669 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:48:57.796150    6669 out.go:177] * [no-preload-684000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:48:57.802829    6669 notify.go:220] Checking for updates...
	I0927 10:48:57.808072    6669 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:48:57.815037    6669 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:48:57.821987    6669 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:48:57.828962    6669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:48:57.836074    6669 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:48:57.841984    6669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:48:57.845392    6669 config.go:182] Loaded profile config "no-preload-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:48:57.845640    6669 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:48:57.848981    6669 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:48:57.855960    6669 start.go:297] selected driver: qemu2
	I0927 10:48:57.855972    6669 start.go:901] validating driver "qemu2" against &{Name:no-preload-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:48:57.856036    6669 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:48:57.858778    6669 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:48:57.858807    6669 cni.go:84] Creating CNI manager for ""
	I0927 10:48:57.858832    6669 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:48:57.858850    6669 start.go:340] cluster config:
	{Name:no-preload-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:48:57.862054    6669 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:57.865989    6669 out.go:177] * Starting "no-preload-684000" primary control-plane node in "no-preload-684000" cluster
	I0927 10:48:57.874964    6669 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:48:57.875084    6669 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/no-preload-684000/config.json ...
	I0927 10:48:57.875076    6669 cache.go:107] acquiring lock: {Name:mkf48093fa971191f71c46f781d51b0e356458e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:57.875078    6669 cache.go:107] acquiring lock: {Name:mk865a669a22d5796e2286bf0736b1694aa96165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:57.875081    6669 cache.go:107] acquiring lock: {Name:mk24e47a2323d63a00172a2ddaddb4a1994e7650 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:57.875119    6669 cache.go:107] acquiring lock: {Name:mkda553429935654ab1b40dd60951d922c0a511b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:57.875162    6669 cache.go:115] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0927 10:48:57.875164    6669 cache.go:115] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0927 10:48:57.875167    6669 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 102.875µs
	I0927 10:48:57.875162    6669 cache.go:115] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0927 10:48:57.875174    6669 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0927 10:48:57.875174    6669 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 104.5µs
	I0927 10:48:57.875188    6669 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0927 10:48:57.875168    6669 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 49.791µs
	I0927 10:48:57.875192    6669 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0927 10:48:57.875174    6669 cache.go:107] acquiring lock: {Name:mk04dfbba0912ab743b61e0c2651ed303f57fbf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:57.875192    6669 cache.go:107] acquiring lock: {Name:mkd789685227a83ec0fe6bd238a1915d538e340b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:57.875207    6669 cache.go:115] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0927 10:48:57.875211    6669 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 140.334µs
	I0927 10:48:57.875215    6669 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0927 10:48:57.875223    6669 cache.go:115] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0927 10:48:57.875224    6669 cache.go:107] acquiring lock: {Name:mk8686aeb51b4f49a20a79af2042c07c2f411e73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:57.875230    6669 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 56.583µs
	I0927 10:48:57.875234    6669 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0927 10:48:57.875240    6669 cache.go:115] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0927 10:48:57.875245    6669 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 81.333µs
	I0927 10:48:57.875254    6669 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0927 10:48:57.875266    6669 cache.go:115] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0927 10:48:57.875265    6669 cache.go:107] acquiring lock: {Name:mka49bbed88dac9760198ae1cf1bf744cd6a8f6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:48:57.875273    6669 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 65.708µs
	I0927 10:48:57.875277    6669 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0927 10:48:57.875313    6669 cache.go:115] /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0927 10:48:57.875317    6669 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 133.708µs
	I0927 10:48:57.875323    6669 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0927 10:48:57.875331    6669 cache.go:87] Successfully saved all images to host disk.
	I0927 10:48:57.875932    6669 start.go:360] acquireMachinesLock for no-preload-684000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:48:57.875963    6669 start.go:364] duration metric: took 25.041µs to acquireMachinesLock for "no-preload-684000"
	I0927 10:48:57.875971    6669 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:48:57.875974    6669 fix.go:54] fixHost starting: 
	I0927 10:48:57.876082    6669 fix.go:112] recreateIfNeeded on no-preload-684000: state=Stopped err=<nil>
	W0927 10:48:57.876092    6669 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:48:57.886038    6669 out.go:177] * Restarting existing qemu2 VM for "no-preload-684000" ...
	I0927 10:48:57.890022    6669 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:48:57.890064    6669 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:71:58:64:16:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2
	I0927 10:48:57.891994    6669 main.go:141] libmachine: STDOUT: 
	I0927 10:48:57.892008    6669 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:48:57.892034    6669 fix.go:56] duration metric: took 16.057708ms for fixHost
	I0927 10:48:57.892039    6669 start.go:83] releasing machines lock for "no-preload-684000", held for 16.07225ms
	W0927 10:48:57.892046    6669 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:48:57.892075    6669 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:48:57.892079    6669 start.go:729] Will try again in 5 seconds ...
	I0927 10:49:02.894181    6669 start.go:360] acquireMachinesLock for no-preload-684000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:02.894558    6669 start.go:364] duration metric: took 290.334µs to acquireMachinesLock for "no-preload-684000"
	I0927 10:49:02.894686    6669 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:49:02.894709    6669 fix.go:54] fixHost starting: 
	I0927 10:49:02.895475    6669 fix.go:112] recreateIfNeeded on no-preload-684000: state=Stopped err=<nil>
	W0927 10:49:02.895501    6669 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:49:02.900864    6669 out.go:177] * Restarting existing qemu2 VM for "no-preload-684000" ...
	I0927 10:49:02.904865    6669 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:02.905107    6669 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:71:58:64:16:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/no-preload-684000/disk.qcow2
	I0927 10:49:02.914438    6669 main.go:141] libmachine: STDOUT: 
	I0927 10:49:02.914489    6669 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:02.914584    6669 fix.go:56] duration metric: took 19.878375ms for fixHost
	I0927 10:49:02.914602    6669 start.go:83] releasing machines lock for "no-preload-684000", held for 20.025458ms
	W0927 10:49:02.914800    6669 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:02.921824    6669 out.go:201] 
	W0927 10:49:02.925896    6669 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:49:02.925922    6669 out.go:270] * 
	* 
	W0927 10:49:02.928514    6669 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:49:02.935942    6669 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-684000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (66.606ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.28s)
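
Note: every qemu2 start in this report dies the same way: the driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client (see the libmachine command lines above) and fails with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', which indicates no socket_vmnet daemon was serving that socket on the CI host. A minimal Go sketch that probes the same socket (the path is the SocketVMnetPath value from the cluster config above; a diagnostic illustration, not part of the suite):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Path copied from SocketVMnetPath in the cluster config logged above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // "connection refused" is the same condition the qemu2 driver hits:
            // the socket file may exist, but no socket_vmnet daemon is serving it.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections on", sock)
    }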

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-936000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-936000 create -f testdata/busybox.yaml: exit status 1 (28.362959ms)

** stderr ** 
	error: context "embed-certs-936000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-936000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (31.382667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-936000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (32.5225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-936000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-936000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-936000 describe deploy/metrics-server -n kube-system: exit status 1 (27.630667ms)

** stderr ** 
	error: context "embed-certs-936000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-936000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (29.733333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-936000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-936000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.198065667s)

-- stdout --
	* [embed-certs-936000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-936000" primary control-plane node in "embed-certs-936000" cluster
	* Restarting existing qemu2 VM for "embed-certs-936000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-936000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:49:01.682129    6710 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:49:01.682272    6710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:01.682275    6710 out.go:358] Setting ErrFile to fd 2...
	I0927 10:49:01.682277    6710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:01.682403    6710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:49:01.683452    6710 out.go:352] Setting JSON to false
	I0927 10:49:01.699326    6710 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4705,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:49:01.699400    6710 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:49:01.704051    6710 out.go:177] * [embed-certs-936000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:49:01.710859    6710 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:49:01.710930    6710 notify.go:220] Checking for updates...
	I0927 10:49:01.719030    6710 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:49:01.722000    6710 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:49:01.724971    6710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:49:01.728023    6710 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:49:01.730934    6710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:49:01.734218    6710 config.go:182] Loaded profile config "embed-certs-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:01.734494    6710 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:49:01.738980    6710 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:49:01.746036    6710 start.go:297] selected driver: qemu2
	I0927 10:49:01.746043    6710 start.go:901] validating driver "qemu2" against &{Name:embed-certs-936000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:49:01.746097    6710 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:49:01.748414    6710 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:49:01.748443    6710 cni.go:84] Creating CNI manager for ""
	I0927 10:49:01.748465    6710 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:49:01.748489    6710 start.go:340] cluster config:
	{Name:embed-certs-936000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:49:01.751896    6710 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:49:01.760986    6710 out.go:177] * Starting "embed-certs-936000" primary control-plane node in "embed-certs-936000" cluster
	I0927 10:49:01.764991    6710 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:49:01.765005    6710 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:49:01.765013    6710 cache.go:56] Caching tarball of preloaded images
	I0927 10:49:01.765073    6710 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:49:01.765078    6710 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:49:01.765132    6710 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/embed-certs-936000/config.json ...
	I0927 10:49:01.765691    6710 start.go:360] acquireMachinesLock for embed-certs-936000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:01.765718    6710 start.go:364] duration metric: took 21.375µs to acquireMachinesLock for "embed-certs-936000"
	I0927 10:49:01.765726    6710 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:49:01.765730    6710 fix.go:54] fixHost starting: 
	I0927 10:49:01.765855    6710 fix.go:112] recreateIfNeeded on embed-certs-936000: state=Stopped err=<nil>
	W0927 10:49:01.765864    6710 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:49:01.773980    6710 out.go:177] * Restarting existing qemu2 VM for "embed-certs-936000" ...
	I0927 10:49:01.778014    6710 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:01.778045    6710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:23:e0:5a:a6:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2
	I0927 10:49:01.779981    6710 main.go:141] libmachine: STDOUT: 
	I0927 10:49:01.780000    6710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:01.780028    6710 fix.go:56] duration metric: took 14.296292ms for fixHost
	I0927 10:49:01.780032    6710 start.go:83] releasing machines lock for "embed-certs-936000", held for 14.310417ms
	W0927 10:49:01.780037    6710 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:49:01.780077    6710 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:01.780082    6710 start.go:729] Will try again in 5 seconds ...
	I0927 10:49:06.780701    6710 start.go:360] acquireMachinesLock for embed-certs-936000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:06.781194    6710 start.go:364] duration metric: took 362.666µs to acquireMachinesLock for "embed-certs-936000"
	I0927 10:49:06.781313    6710 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:49:06.781334    6710 fix.go:54] fixHost starting: 
	I0927 10:49:06.782075    6710 fix.go:112] recreateIfNeeded on embed-certs-936000: state=Stopped err=<nil>
	W0927 10:49:06.782101    6710 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:49:06.802341    6710 out.go:177] * Restarting existing qemu2 VM for "embed-certs-936000" ...
	I0927 10:49:06.806489    6710 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:06.806669    6710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:23:e0:5a:a6:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/embed-certs-936000/disk.qcow2
	I0927 10:49:06.814466    6710 main.go:141] libmachine: STDOUT: 
	I0927 10:49:06.814526    6710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:06.814618    6710 fix.go:56] duration metric: took 33.285292ms for fixHost
	I0927 10:49:06.814635    6710 start.go:83] releasing machines lock for "embed-certs-936000", held for 33.418583ms
	W0927 10:49:06.814856    6710 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-936000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-936000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:06.822468    6710 out.go:201] 
	W0927 10:49:06.825628    6710 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:49:06.825657    6710 out.go:270] * 
	* 
	W0927 10:49:06.828116    6710 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:49:06.838537    6710 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-936000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (66.409959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-684000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (32.162834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-684000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-684000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-684000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.921333ms)

** stderr ** 
	error: context "no-preload-684000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-684000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (29.7855ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-684000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (29.796625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
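
Note: the "(-want +got)" listing above is the diff format produced by github.com/google/go-cmp; with the host never started, "image list" returns nothing, so every expected v1.31.1 image is reported as missing. A standalone sketch of that comparison (two image names copied from the want list above; assumes the go-cmp module is available):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        // Expected images (subset copied from the test's want list above).
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/pause:3.10",
        }
        // A never-started host reports no images at all.
        got := []string{}
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
        }
    }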

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-684000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-684000 --alsologtostderr -v=1: exit status 83 (39.791083ms)

-- stdout --
	* The control-plane node no-preload-684000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-684000"

-- /stdout --
** stderr ** 
	I0927 10:49:03.206097    6729 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:49:03.206266    6729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:03.206269    6729 out.go:358] Setting ErrFile to fd 2...
	I0927 10:49:03.206271    6729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:03.206395    6729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:49:03.206605    6729 out.go:352] Setting JSON to false
	I0927 10:49:03.206613    6729 mustload.go:65] Loading cluster: no-preload-684000
	I0927 10:49:03.206825    6729 config.go:182] Loaded profile config "no-preload-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:03.210339    6729 out.go:177] * The control-plane node no-preload-684000 host is not running: state=Stopped
	I0927 10:49:03.213454    6729 out.go:177]   To start a cluster, run: "minikube start -p no-preload-684000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-684000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (29.511958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-684000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (29.262833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-488000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-488000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.953540209s)

-- stdout --
	* [default-k8s-diff-port-488000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-488000" primary control-plane node in "default-k8s-diff-port-488000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-488000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:49:03.627083    6753 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:49:03.627219    6753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:03.627222    6753 out.go:358] Setting ErrFile to fd 2...
	I0927 10:49:03.627225    6753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:03.627351    6753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:49:03.628415    6753 out.go:352] Setting JSON to false
	I0927 10:49:03.644392    6753 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4707,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:49:03.644450    6753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:49:03.649503    6753 out.go:177] * [default-k8s-diff-port-488000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:49:03.657460    6753 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:49:03.657496    6753 notify.go:220] Checking for updates...
	I0927 10:49:03.665364    6753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:49:03.669483    6753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:49:03.670869    6753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:49:03.675400    6753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:49:03.678492    6753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:49:03.680322    6753 config.go:182] Loaded profile config "embed-certs-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:03.680381    6753 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:03.680435    6753 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:49:03.684384    6753 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:49:03.691317    6753 start.go:297] selected driver: qemu2
	I0927 10:49:03.691323    6753 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:49:03.691329    6753 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:49:03.693702    6753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 10:49:03.697405    6753 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:49:03.700472    6753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:49:03.700489    6753 cni.go:84] Creating CNI manager for ""
	I0927 10:49:03.700511    6753 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:49:03.700519    6753 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:49:03.700550    6753 start.go:340] cluster config:
	{Name:default-k8s-diff-port-488000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:49:03.704438    6753 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:49:03.711397    6753 out.go:177] * Starting "default-k8s-diff-port-488000" primary control-plane node in "default-k8s-diff-port-488000" cluster
	I0927 10:49:03.715483    6753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:49:03.715500    6753 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:49:03.715509    6753 cache.go:56] Caching tarball of preloaded images
	I0927 10:49:03.715568    6753 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:49:03.715574    6753 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:49:03.715647    6753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/default-k8s-diff-port-488000/config.json ...
	I0927 10:49:03.715659    6753 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/default-k8s-diff-port-488000/config.json: {Name:mk740720d22f12742334fcbc79ae7a4fc2a6224a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:49:03.715896    6753 start.go:360] acquireMachinesLock for default-k8s-diff-port-488000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:03.715937    6753 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "default-k8s-diff-port-488000"
	I0927 10:49:03.715951    6753 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:49:03.715998    6753 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:49:03.724460    6753 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:49:03.742685    6753 start.go:159] libmachine.API.Create for "default-k8s-diff-port-488000" (driver="qemu2")
	I0927 10:49:03.742717    6753 client.go:168] LocalClient.Create starting
	I0927 10:49:03.742784    6753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:49:03.742823    6753 main.go:141] libmachine: Decoding PEM data...
	I0927 10:49:03.742832    6753 main.go:141] libmachine: Parsing certificate...
	I0927 10:49:03.742880    6753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:49:03.742904    6753 main.go:141] libmachine: Decoding PEM data...
	I0927 10:49:03.742912    6753 main.go:141] libmachine: Parsing certificate...
	I0927 10:49:03.743373    6753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:49:03.911695    6753 main.go:141] libmachine: Creating SSH key...
	I0927 10:49:04.068627    6753 main.go:141] libmachine: Creating Disk image...
	I0927 10:49:04.068633    6753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:49:04.068838    6753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2
	I0927 10:49:04.078442    6753 main.go:141] libmachine: STDOUT: 
	I0927 10:49:04.078458    6753 main.go:141] libmachine: STDERR: 
	I0927 10:49:04.078524    6753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2 +20000M
	I0927 10:49:04.086356    6753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:49:04.086370    6753 main.go:141] libmachine: STDERR: 
	I0927 10:49:04.086381    6753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2
	I0927 10:49:04.086393    6753 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:49:04.086407    6753 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:04.086433    6753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e0:ed:02:7c:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2
	I0927 10:49:04.088110    6753 main.go:141] libmachine: STDOUT: 
	I0927 10:49:04.088123    6753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:04.088141    6753 client.go:171] duration metric: took 345.427709ms to LocalClient.Create
	I0927 10:49:06.090315    6753 start.go:128] duration metric: took 2.374319292s to createHost
	I0927 10:49:06.090370    6753 start.go:83] releasing machines lock for "default-k8s-diff-port-488000", held for 2.3744895s
	W0927 10:49:06.090434    6753 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:06.106774    6753 out.go:177] * Deleting "default-k8s-diff-port-488000" in qemu2 ...
	W0927 10:49:06.142626    6753 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:06.142645    6753 start.go:729] Will try again in 5 seconds ...
	I0927 10:49:11.144734    6753 start.go:360] acquireMachinesLock for default-k8s-diff-port-488000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:11.145224    6753 start.go:364] duration metric: took 371.708µs to acquireMachinesLock for "default-k8s-diff-port-488000"
	I0927 10:49:11.145358    6753 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:49:11.145596    6753 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:49:11.150080    6753 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:49:11.200366    6753 start.go:159] libmachine.API.Create for "default-k8s-diff-port-488000" (driver="qemu2")
	I0927 10:49:11.200427    6753 client.go:168] LocalClient.Create starting
	I0927 10:49:11.200637    6753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:49:11.200715    6753 main.go:141] libmachine: Decoding PEM data...
	I0927 10:49:11.200732    6753 main.go:141] libmachine: Parsing certificate...
	I0927 10:49:11.200805    6753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:49:11.200853    6753 main.go:141] libmachine: Decoding PEM data...
	I0927 10:49:11.200868    6753 main.go:141] libmachine: Parsing certificate...
	I0927 10:49:11.203332    6753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:49:11.382075    6753 main.go:141] libmachine: Creating SSH key...
	I0927 10:49:11.484186    6753 main.go:141] libmachine: Creating Disk image...
	I0927 10:49:11.484193    6753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:49:11.484377    6753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2
	I0927 10:49:11.493742    6753 main.go:141] libmachine: STDOUT: 
	I0927 10:49:11.493759    6753 main.go:141] libmachine: STDERR: 
	I0927 10:49:11.493818    6753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2 +20000M
	I0927 10:49:11.501617    6753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:49:11.501636    6753 main.go:141] libmachine: STDERR: 
	I0927 10:49:11.501650    6753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2
	I0927 10:49:11.501660    6753 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:49:11.501669    6753 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:11.501694    6753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:85:14:e4:e9:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2
	I0927 10:49:11.503428    6753 main.go:141] libmachine: STDOUT: 
	I0927 10:49:11.503440    6753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:11.503455    6753 client.go:171] duration metric: took 303.031125ms to LocalClient.Create
	I0927 10:49:13.505579    6753 start.go:128] duration metric: took 2.360022208s to createHost
	I0927 10:49:13.505641    6753 start.go:83] releasing machines lock for "default-k8s-diff-port-488000", held for 2.360460917s
	W0927 10:49:13.505974    6753 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:13.516605    6753 out.go:201] 
	W0927 10:49:13.525668    6753 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:49:13.525697    6753 out.go:270] * 
	* 
	W0927 10:49:13.528349    6753 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:49:13.537567    6753 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-488000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (64.43ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.02s)
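
Note: every qemu2 start in this run dies the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), the VM is never launched, and minikube retries once before exiting with GUEST_PROVISION. A hedged pre-flight sketch in Go that probes the same unix socket the launcher uses (the path comes from SocketVMnetPath in the config dump above; running such a check up front is an assumption, not something minikube does itself):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The path SocketVMnetPath points at in the cluster config above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Matches the failure in the logs: "Connection refused" means the
		// socket_vmnet daemon is not running (or not listening on this path).
		fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this dial fails the way it does here, restarting the socket_vmnet daemon on the host is a plausible first step; the later GUEST_PROVISION error is downstream of this one.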

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-936000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (31.691708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
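
Note: this sub-test never reaches the dashboard-pod wait: the "embed-certs-936000" context was never written to the kubeconfig because the cluster never came up. A sketch of the kind of context-existence check that fails here, using client-go's kubeconfig loader (the k8s.io/client-go dependency and the helper shape are assumptions; only the context name comes from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the merged kubeconfig, honoring KUBECONFIG as the test run does.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Println("loading kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["embed-certs-936000"]; !ok {
		// This is the condition behind: context "embed-certs-936000" does not exist.
		fmt.Println(`context "embed-certs-936000" does not exist`)
		return
	}
	fmt.Println("context present")
}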

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-936000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-936000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-936000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.443375ms)

** stderr ** 
	error: context "embed-certs-936000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-936000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (29.705708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-936000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (29.148375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
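
Note: the -want +got diff above has every expected v1.31.1 image on the want side and nothing on the got side: with the host stopped, "image list --format=json" has no runtime to query. A stdlib-only sketch of the set difference such an assertion boils down to (the expected list is copied from the diff; the helper itself is illustrative, not the test's actual code):

package main

import "fmt"

// missing returns the images in want that do not appear in got.
func missing(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var out []string
	for _, img := range want {
		if !have[img] {
			out = append(out, img)
		}
	}
	return out
}

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	got := []string{} // stopped host: the image list comes back empty
	fmt.Println(len(missing(want, got)), "images missing")
}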

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-936000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-936000 --alsologtostderr -v=1: exit status 83 (40.433459ms)

-- stdout --
	* The control-plane node embed-certs-936000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-936000"

-- /stdout --
** stderr ** 
	I0927 10:49:07.106438    6775 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:49:07.106601    6775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:07.106604    6775 out.go:358] Setting ErrFile to fd 2...
	I0927 10:49:07.106607    6775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:07.106748    6775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:49:07.106986    6775 out.go:352] Setting JSON to false
	I0927 10:49:07.106996    6775 mustload.go:65] Loading cluster: embed-certs-936000
	I0927 10:49:07.107241    6775 config.go:182] Loaded profile config "embed-certs-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:07.111504    6775 out.go:177] * The control-plane node embed-certs-936000 host is not running: state=Stopped
	I0927 10:49:07.115398    6775 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-936000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-936000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (29.256959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-936000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (29.406916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
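
Note: in each failed start above, the disk-image steps succeed before the launch fails: libmachine converts the raw boot disk to qcow2, grows it by 20000M, and only then errors out on the socket_vmnet-wrapped qemu-system-aarch64 invocation. A sketch replaying those two qemu-img calls from Go (the machineDir placeholder stands in for the per-profile .minikube/machines path in the logs; qemu-img must be on PATH):

package main

import (
	"fmt"
	"os/exec"
)

// replayDiskCreate runs the two qemu-img steps shown in the logs above:
// a raw -> qcow2 conversion, then a +20000M resize.
func replayDiskCreate(machineDir string) error {
	raw := machineDir + "/disk.qcow2.raw"
	qcow := machineDir + "/disk.qcow2"

	steps := [][]string{
		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow},
		{"qemu-img", "resize", qcow, "+20000M"},
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %w\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	// Hypothetical directory; the real runs use the per-profile machine path.
	if err := replayDiskCreate("/tmp/demo-machine"); err != nil {
		fmt.Println(err)
	}
}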

TestStartStop/group/newest-cni/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
E0927 10:49:12.460395    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.958076125s)

-- stdout --
	* [newest-cni-367000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-367000" primary control-plane node in "newest-cni-367000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-367000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:49:07.420385    6792 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:49:07.420522    6792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:07.420525    6792 out.go:358] Setting ErrFile to fd 2...
	I0927 10:49:07.420528    6792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:07.420665    6792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:49:07.421728    6792 out.go:352] Setting JSON to false
	I0927 10:49:07.438027    6792 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4711,"bootTime":1727454636,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:49:07.438103    6792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:49:07.442499    6792 out.go:177] * [newest-cni-367000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:49:07.449519    6792 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:49:07.449572    6792 notify.go:220] Checking for updates...
	I0927 10:49:07.456430    6792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:49:07.459431    6792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:49:07.462376    6792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:49:07.465407    6792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:49:07.468438    6792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:49:07.471807    6792 config.go:182] Loaded profile config "default-k8s-diff-port-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:07.471865    6792 config.go:182] Loaded profile config "multinode-874000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:07.471910    6792 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:49:07.476344    6792 out.go:177] * Using the qemu2 driver based on user configuration
	I0927 10:49:07.483404    6792 start.go:297] selected driver: qemu2
	I0927 10:49:07.483414    6792 start.go:901] validating driver "qemu2" against <nil>
	I0927 10:49:07.483421    6792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:49:07.485873    6792 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0927 10:49:07.485911    6792 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0927 10:49:07.493497    6792 out.go:177] * Automatically selected the socket_vmnet network
	I0927 10:49:07.496527    6792 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0927 10:49:07.496559    6792 cni.go:84] Creating CNI manager for ""
	I0927 10:49:07.496585    6792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:49:07.496590    6792 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 10:49:07.496623    6792 start.go:340] cluster config:
	{Name:newest-cni-367000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:49:07.500458    6792 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:49:07.507451    6792 out.go:177] * Starting "newest-cni-367000" primary control-plane node in "newest-cni-367000" cluster
	I0927 10:49:07.511456    6792 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:49:07.511470    6792 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:49:07.511480    6792 cache.go:56] Caching tarball of preloaded images
	I0927 10:49:07.511553    6792 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:49:07.511559    6792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:49:07.511626    6792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/newest-cni-367000/config.json ...
	I0927 10:49:07.511637    6792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/newest-cni-367000/config.json: {Name:mkce1155749e4979aea1ca463f968a7334d710c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 10:49:07.512037    6792 start.go:360] acquireMachinesLock for newest-cni-367000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:07.512078    6792 start.go:364] duration metric: took 32.084µs to acquireMachinesLock for "newest-cni-367000"
	I0927 10:49:07.512091    6792 start.go:93] Provisioning new machine with config: &{Name:newest-cni-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:49:07.512136    6792 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:49:07.519406    6792 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:49:07.537382    6792 start.go:159] libmachine.API.Create for "newest-cni-367000" (driver="qemu2")
	I0927 10:49:07.537412    6792 client.go:168] LocalClient.Create starting
	I0927 10:49:07.537483    6792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:49:07.537514    6792 main.go:141] libmachine: Decoding PEM data...
	I0927 10:49:07.537523    6792 main.go:141] libmachine: Parsing certificate...
	I0927 10:49:07.537560    6792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:49:07.537587    6792 main.go:141] libmachine: Decoding PEM data...
	I0927 10:49:07.537593    6792 main.go:141] libmachine: Parsing certificate...
	I0927 10:49:07.537926    6792 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:49:07.697572    6792 main.go:141] libmachine: Creating SSH key...
	I0927 10:49:07.763012    6792 main.go:141] libmachine: Creating Disk image...
	I0927 10:49:07.763017    6792 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:49:07.763192    6792 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2
	I0927 10:49:07.772474    6792 main.go:141] libmachine: STDOUT: 
	I0927 10:49:07.772494    6792 main.go:141] libmachine: STDERR: 
	I0927 10:49:07.772567    6792 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2 +20000M
	I0927 10:49:07.780620    6792 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:49:07.780639    6792 main.go:141] libmachine: STDERR: 
	I0927 10:49:07.780663    6792 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2
	I0927 10:49:07.780667    6792 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:49:07.780680    6792 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:07.780722    6792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b1:10:63:79:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2
	I0927 10:49:07.782377    6792 main.go:141] libmachine: STDOUT: 
	I0927 10:49:07.782391    6792 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:07.782412    6792 client.go:171] duration metric: took 245.000959ms to LocalClient.Create
	I0927 10:49:09.784532    6792 start.go:128] duration metric: took 2.272433583s to createHost
	I0927 10:49:09.784633    6792 start.go:83] releasing machines lock for "newest-cni-367000", held for 2.272584583s
	W0927 10:49:09.784701    6792 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:09.794784    6792 out.go:177] * Deleting "newest-cni-367000" in qemu2 ...
	W0927 10:49:09.835044    6792 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:09.835060    6792 start.go:729] Will try again in 5 seconds ...
	I0927 10:49:14.837188    6792 start.go:360] acquireMachinesLock for newest-cni-367000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:14.837621    6792 start.go:364] duration metric: took 345.042µs to acquireMachinesLock for "newest-cni-367000"
	I0927 10:49:14.837800    6792 start.go:93] Provisioning new machine with config: &{Name:newest-cni-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 10:49:14.838078    6792 start.go:125] createHost starting for "" (driver="qemu2")
	I0927 10:49:14.843874    6792 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 10:49:14.894920    6792 start.go:159] libmachine.API.Create for "newest-cni-367000" (driver="qemu2")
	I0927 10:49:14.894972    6792 client.go:168] LocalClient.Create starting
	I0927 10:49:14.895068    6792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/ca.pem
	I0927 10:49:14.895113    6792 main.go:141] libmachine: Decoding PEM data...
	I0927 10:49:14.895131    6792 main.go:141] libmachine: Parsing certificate...
	I0927 10:49:14.895199    6792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19712-1508/.minikube/certs/cert.pem
	I0927 10:49:14.895228    6792 main.go:141] libmachine: Decoding PEM data...
	I0927 10:49:14.895241    6792 main.go:141] libmachine: Parsing certificate...
	I0927 10:49:14.895842    6792 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0927 10:49:15.077991    6792 main.go:141] libmachine: Creating SSH key...
	I0927 10:49:15.280198    6792 main.go:141] libmachine: Creating Disk image...
	I0927 10:49:15.280208    6792 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0927 10:49:15.280429    6792 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2.raw /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2
	I0927 10:49:15.289662    6792 main.go:141] libmachine: STDOUT: 
	I0927 10:49:15.289686    6792 main.go:141] libmachine: STDERR: 
	I0927 10:49:15.289756    6792 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2 +20000M
	I0927 10:49:15.297739    6792 main.go:141] libmachine: STDOUT: Image resized.
	
	I0927 10:49:15.297756    6792 main.go:141] libmachine: STDERR: 
	I0927 10:49:15.297769    6792 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2
	I0927 10:49:15.297777    6792 main.go:141] libmachine: Starting QEMU VM...
	I0927 10:49:15.297787    6792 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:15.297828    6792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:06:45:42:40:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2
	I0927 10:49:15.299434    6792 main.go:141] libmachine: STDOUT: 
	I0927 10:49:15.299449    6792 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:15.299461    6792 client.go:171] duration metric: took 404.494292ms to LocalClient.Create
	I0927 10:49:17.301605    6792 start.go:128] duration metric: took 2.463551416s to createHost
	I0927 10:49:17.301680    6792 start.go:83] releasing machines lock for "newest-cni-367000", held for 2.464102375s
	W0927 10:49:17.301999    6792 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-367000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-367000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:17.314497    6792 out.go:201] 
	W0927 10:49:17.321577    6792 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:49:17.321604    6792 out.go:270] * 
	* 
	W0927 10:49:17.324177    6792 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:49:17.336458    6792 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (62.58975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.02s)
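
All of the start failures above share one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the guest is never created. A minimal triage sketch for the build host follows; it assumes socket_vmnet was installed via Homebrew, and the service name is an assumption, not something this log confirms:

	# Is anything present at the socket path minikube was configured with
	# (SocketVMnetPath:/var/run/socket_vmnet in the cluster config above)?
	ls -l /var/run/socket_vmnet
	# Is the daemon process alive?
	pgrep -fl socket_vmnet
	# Assumption: Homebrew-managed service; it needs root to create the socket under /var/run
	sudo brew services restart socket_vmnet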

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-488000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-488000 create -f testdata/busybox.yaml: exit status 1 (29.732583ms)

** stderr ** 
	error: context "default-k8s-diff-port-488000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-488000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (29.294ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (29.531959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
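
The kubectl steps in this group fail before reaching any cluster: "context does not exist" means the default-k8s-diff-port-488000 entry was never written to the kubeconfig, which follows from the failed start above rather than from anything kubectl did. A quick check (a sketch; the profile name is taken from the log above):

	# Prints context names only; the profile's context stays absent until a start succeeds
	kubectl config get-contexts -o name | grep default-k8s-diff-port-488000 || echo "context missing"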

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-488000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-488000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-488000 describe deploy/metrics-server -n kube-system: exit status 1 (26.28125ms)

** stderr ** 
	error: context "default-k8s-diff-port-488000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-488000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (29.16325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-488000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-488000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.335433333s)

-- stdout --
	* [default-k8s-diff-port-488000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-488000" primary control-plane node in "default-k8s-diff-port-488000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:49:16.090826    6840 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:49:16.090946    6840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:16.090949    6840 out.go:358] Setting ErrFile to fd 2...
	I0927 10:49:16.090951    6840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:16.091086    6840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:49:16.092034    6840 out.go:352] Setting JSON to false
	I0927 10:49:16.108564    6840 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4720,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:49:16.108638    6840 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:49:16.113786    6840 out.go:177] * [default-k8s-diff-port-488000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:49:16.121730    6840 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:49:16.121820    6840 notify.go:220] Checking for updates...
	I0927 10:49:16.128656    6840 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:49:16.131705    6840 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:49:16.134627    6840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:49:16.137711    6840 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:49:16.140783    6840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:49:16.142544    6840 config.go:182] Loaded profile config "default-k8s-diff-port-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:16.142794    6840 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:49:16.147679    6840 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:49:16.154548    6840 start.go:297] selected driver: qemu2
	I0927 10:49:16.154554    6840 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:49:16.154597    6840 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:49:16.157000    6840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 10:49:16.157026    6840 cni.go:84] Creating CNI manager for ""
	I0927 10:49:16.157048    6840 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:49:16.157072    6840 start.go:340] cluster config:
	{Name:default-k8s-diff-port-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:49:16.160748    6840 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:49:16.168761    6840 out.go:177] * Starting "default-k8s-diff-port-488000" primary control-plane node in "default-k8s-diff-port-488000" cluster
	I0927 10:49:16.172689    6840 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:49:16.172712    6840 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:49:16.172722    6840 cache.go:56] Caching tarball of preloaded images
	I0927 10:49:16.172777    6840 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:49:16.172783    6840 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:49:16.172840    6840 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/default-k8s-diff-port-488000/config.json ...
	I0927 10:49:16.173322    6840 start.go:360] acquireMachinesLock for default-k8s-diff-port-488000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:17.301805    6840 start.go:364] duration metric: took 1.128469167s to acquireMachinesLock for "default-k8s-diff-port-488000"
	I0927 10:49:17.301991    6840 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:49:17.302060    6840 fix.go:54] fixHost starting: 
	I0927 10:49:17.302798    6840 fix.go:112] recreateIfNeeded on default-k8s-diff-port-488000: state=Stopped err=<nil>
	W0927 10:49:17.302848    6840 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:49:17.318502    6840 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-488000" ...
	I0927 10:49:17.325481    6840 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:17.325657    6840 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:85:14:e4:e9:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2
	I0927 10:49:17.336309    6840 main.go:141] libmachine: STDOUT: 
	I0927 10:49:17.336415    6840 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:17.336554    6840 fix.go:56] duration metric: took 34.523209ms for fixHost
	I0927 10:49:17.336573    6840 start.go:83] releasing machines lock for "default-k8s-diff-port-488000", held for 34.704875ms
	W0927 10:49:17.336608    6840 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:49:17.336756    6840 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:17.336775    6840 start.go:729] Will try again in 5 seconds ...
	I0927 10:49:22.338883    6840 start.go:360] acquireMachinesLock for default-k8s-diff-port-488000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:22.339300    6840 start.go:364] duration metric: took 299.833µs to acquireMachinesLock for "default-k8s-diff-port-488000"
	I0927 10:49:22.339862    6840 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:49:22.339890    6840 fix.go:54] fixHost starting: 
	I0927 10:49:22.340700    6840 fix.go:112] recreateIfNeeded on default-k8s-diff-port-488000: state=Stopped err=<nil>
	W0927 10:49:22.340732    6840 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:49:22.346322    6840 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-488000" ...
	I0927 10:49:22.354159    6840 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:22.354384    6840 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:85:14:e4:e9:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/default-k8s-diff-port-488000/disk.qcow2
	I0927 10:49:22.363485    6840 main.go:141] libmachine: STDOUT: 
	I0927 10:49:22.363541    6840 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:22.363638    6840 fix.go:56] duration metric: took 23.752583ms for fixHost
	I0927 10:49:22.363665    6840 start.go:83] releasing machines lock for "default-k8s-diff-port-488000", held for 24.340125ms
	W0927 10:49:22.363860    6840 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:22.371102    6840 out.go:201] 
	W0927 10:49:22.375180    6840 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:49:22.375204    6840 out.go:270] * 
	* 
	W0927 10:49:22.377661    6840 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:49:22.385142    6840 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-488000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (67.052375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.40s)
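
Two distinct exit codes appear in this block: the start command exits 80 (the GUEST_PROVISION failure it logs), while the post-mortem status probe exits 7, which the harness tolerates ("may be ok") because it merely reports a stopped host. A sketch for observing the status code by hand, reusing the harness's own invocation:

	# With the VM stopped this prints "Stopped" and exits non-zero (7 in the run above)
	out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000; echo "status exit: $?"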

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.186784209s)

-- stdout --
	* [newest-cni-367000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-367000" primary control-plane node in "newest-cni-367000" cluster
	* Restarting existing qemu2 VM for "newest-cni-367000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-367000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0927 10:49:20.927852    6873 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:49:20.927991    6873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:20.927994    6873 out.go:358] Setting ErrFile to fd 2...
	I0927 10:49:20.927997    6873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:20.928110    6873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:49:20.929160    6873 out.go:352] Setting JSON to false
	I0927 10:49:20.945233    6873 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4724,"bootTime":1727454636,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:49:20.945307    6873 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:49:20.949634    6873 out.go:177] * [newest-cni-367000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:49:20.956641    6873 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:49:20.956669    6873 notify.go:220] Checking for updates...
	I0927 10:49:20.964511    6873 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:49:20.967587    6873 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:49:20.970628    6873 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:49:20.973539    6873 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:49:20.976585    6873 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:49:20.979858    6873 config.go:182] Loaded profile config "newest-cni-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:20.980134    6873 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:49:20.984517    6873 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:49:20.991603    6873 start.go:297] selected driver: qemu2
	I0927 10:49:20.991610    6873 start.go:901] validating driver "qemu2" against &{Name:newest-cni-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:49:20.991680    6873 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:49:20.994027    6873 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0927 10:49:20.994050    6873 cni.go:84] Creating CNI manager for ""
	I0927 10:49:20.994075    6873 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 10:49:20.994112    6873 start.go:340] cluster config:
	{Name:newest-cni-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:49:20.997664    6873 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 10:49:21.005604    6873 out.go:177] * Starting "newest-cni-367000" primary control-plane node in "newest-cni-367000" cluster
	I0927 10:49:21.009547    6873 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 10:49:21.009564    6873 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 10:49:21.009575    6873 cache.go:56] Caching tarball of preloaded images
	I0927 10:49:21.009633    6873 preload.go:172] Found /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 10:49:21.009645    6873 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 10:49:21.009704    6873 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/newest-cni-367000/config.json ...
	I0927 10:49:21.010195    6873 start.go:360] acquireMachinesLock for newest-cni-367000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:21.010226    6873 start.go:364] duration metric: took 23.333µs to acquireMachinesLock for "newest-cni-367000"
	I0927 10:49:21.010235    6873 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:49:21.010240    6873 fix.go:54] fixHost starting: 
	I0927 10:49:21.010359    6873 fix.go:112] recreateIfNeeded on newest-cni-367000: state=Stopped err=<nil>
	W0927 10:49:21.010368    6873 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:49:21.014607    6873 out.go:177] * Restarting existing qemu2 VM for "newest-cni-367000" ...
	I0927 10:49:21.022536    6873 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:21.022569    6873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:06:45:42:40:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2
	I0927 10:49:21.024771    6873 main.go:141] libmachine: STDOUT: 
	I0927 10:49:21.024792    6873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:21.024825    6873 fix.go:56] duration metric: took 14.584959ms for fixHost
	I0927 10:49:21.024831    6873 start.go:83] releasing machines lock for "newest-cni-367000", held for 14.601ms
	W0927 10:49:21.024837    6873 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:49:21.024876    6873 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:21.024881    6873 start.go:729] Will try again in 5 seconds ...
	I0927 10:49:26.026992    6873 start.go:360] acquireMachinesLock for newest-cni-367000: {Name:mkc1273ce1564ef0395f86cc3421dbf28514bb70 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 10:49:26.027523    6873 start.go:364] duration metric: took 415.917µs to acquireMachinesLock for "newest-cni-367000"
	I0927 10:49:26.027656    6873 start.go:96] Skipping create...Using existing machine configuration
	I0927 10:49:26.027675    6873 fix.go:54] fixHost starting: 
	I0927 10:49:26.028458    6873 fix.go:112] recreateIfNeeded on newest-cni-367000: state=Stopped err=<nil>
	W0927 10:49:26.028483    6873 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 10:49:26.033946    6873 out.go:177] * Restarting existing qemu2 VM for "newest-cni-367000" ...
	I0927 10:49:26.042884    6873 qemu.go:418] Using hvf for hardware acceleration
	I0927 10:49:26.043104    6873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:06:45:42:40:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19712-1508/.minikube/machines/newest-cni-367000/disk.qcow2
	I0927 10:49:26.053003    6873 main.go:141] libmachine: STDOUT: 
	I0927 10:49:26.053056    6873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0927 10:49:26.053165    6873 fix.go:56] duration metric: took 25.49ms for fixHost
	I0927 10:49:26.053185    6873 start.go:83] releasing machines lock for "newest-cni-367000", held for 25.641291ms
	W0927 10:49:26.053373    6873 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-367000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-367000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0927 10:49:26.061844    6873 out.go:201] 
	W0927 10:49:26.064926    6873 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0927 10:49:26.064952    6873 out.go:270] * 
	* 
	W0927 10:49:26.067385    6873 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 10:49:26.074920    6873 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (68.502792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-488000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (31.899708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-488000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-488000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-488000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.246917ms)

** stderr ** 
	error: context "default-k8s-diff-port-488000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-488000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (29.465375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-488000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (29.087583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
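The empty "got" side of the diff above means image list returned nothing, which is expected when the profile's VM never started (see the Stopped post-mortem above). A hypothetical spot-check against a healthy profile of the same name, reusing only the command from the log:

	out/minikube-darwin-arm64 -p default-k8s-diff-port-488000 image list --format=json | grep -c registry.k8s.io/kube-apiserver

On a running cluster this prints a non-zero count; in this run the list is empty.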

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-488000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-488000 --alsologtostderr -v=1: exit status 83 (40.456ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-488000"

-- /stdout --
** stderr ** 
	I0927 10:49:22.652052    6892 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:49:22.652209    6892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:22.652213    6892 out.go:358] Setting ErrFile to fd 2...
	I0927 10:49:22.652215    6892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:22.652345    6892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:49:22.652568    6892 out.go:352] Setting JSON to false
	I0927 10:49:22.652577    6892 mustload.go:65] Loading cluster: default-k8s-diff-port-488000
	I0927 10:49:22.652823    6892 config.go:182] Loaded profile config "default-k8s-diff-port-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:22.657180    6892 out.go:177] * The control-plane node default-k8s-diff-port-488000 host is not running: state=Stopped
	I0927 10:49:22.661066    6892 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-488000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-488000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (28.951292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (29.416375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-367000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (30.394916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-367000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-367000 --alsologtostderr -v=1: exit status 83 (42.202667ms)

-- stdout --
	* The control-plane node newest-cni-367000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-367000"

-- /stdout --
** stderr ** 
	I0927 10:49:26.259357    6916 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:49:26.259514    6916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:26.259517    6916 out.go:358] Setting ErrFile to fd 2...
	I0927 10:49:26.259520    6916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:49:26.259635    6916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:49:26.259850    6916 out.go:352] Setting JSON to false
	I0927 10:49:26.259859    6916 mustload.go:65] Loading cluster: newest-cni-367000
	I0927 10:49:26.260077    6916 config.go:182] Loaded profile config "newest-cni-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:49:26.264192    6916 out.go:177] * The control-plane node newest-cni-367000 host is not running: state=Stopped
	I0927 10:49:26.268127    6916 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-367000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-367000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (29.913583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-367000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (30.267417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
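Both Pause failures in this run have the same shape: mustload loads the profile config, finds the host stopped, prints start guidance, and exits 83 instead of pausing. A hypothetical reproduction against any stopped profile (the profile name is a placeholder):

	out/minikube-darwin-arm64 stop -p <profile>
	out/minikube-darwin-arm64 pause -p <profile>
	echo $?   # 83 in this run's logs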

Test pass (154/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 8.43
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 197.78
29 TestAddons/serial/Volcano 38.88
31 TestAddons/serial/GCPAuth/Namespaces 0.08
34 TestAddons/parallel/Ingress 17.3
35 TestAddons/parallel/InspektorGadget 10.26
36 TestAddons/parallel/MetricsServer 6.29
38 TestAddons/parallel/CSI 52.41
39 TestAddons/parallel/Headlamp 16.61
40 TestAddons/parallel/CloudSpanner 5.19
41 TestAddons/parallel/LocalPath 41.02
42 TestAddons/parallel/NvidiaDevicePlugin 6.2
43 TestAddons/parallel/Yakd 11.26
44 TestAddons/StoppedEnableDisable 12.4
52 TestHyperKitDriverInstallOrUpdate 10.49
55 TestErrorSpam/setup 35.66
56 TestErrorSpam/start 0.34
57 TestErrorSpam/status 0.23
58 TestErrorSpam/pause 0.69
59 TestErrorSpam/unpause 0.63
60 TestErrorSpam/stop 55.3
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 73.4
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 36.08
67 TestFunctional/serial/KubeContext 0.03
68 TestFunctional/serial/KubectlGetPods 0.05
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.73
72 TestFunctional/serial/CacheCmd/cache/add_local 1.65
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
76 TestFunctional/serial/CacheCmd/cache/cache_reload 0.63
77 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/serial/MinikubeKubectlCmd 2.42
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
80 TestFunctional/serial/ExtraConfig 38.64
81 TestFunctional/serial/ComponentHealth 0.04
82 TestFunctional/serial/LogsCmd 0.66
83 TestFunctional/serial/LogsFileCmd 0.65
84 TestFunctional/serial/InvalidService 3.78
86 TestFunctional/parallel/ConfigCmd 0.22
87 TestFunctional/parallel/DashboardCmd 6.66
88 TestFunctional/parallel/DryRun 0.24
89 TestFunctional/parallel/InternationalLanguage 0.11
90 TestFunctional/parallel/StatusCmd 0.26
95 TestFunctional/parallel/AddonsCmd 0.1
96 TestFunctional/parallel/PersistentVolumeClaim 26.07
98 TestFunctional/parallel/SSHCmd 0.13
99 TestFunctional/parallel/CpCmd 0.38
101 TestFunctional/parallel/FileSync 0.07
102 TestFunctional/parallel/CertSync 0.38
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.11
110 TestFunctional/parallel/License 0.25
111 TestFunctional/parallel/Version/short 0.04
112 TestFunctional/parallel/Version/components 0.16
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
117 TestFunctional/parallel/ImageCommands/ImageBuild 1.9
118 TestFunctional/parallel/ImageCommands/Setup 1.78
119 TestFunctional/parallel/DockerEnv/bash 0.32
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.48
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.4
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.21
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.11
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
141 TestFunctional/parallel/ServiceCmd/DeployApp 6.1
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
143 TestFunctional/parallel/ProfileCmd/profile_list 0.13
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.13
145 TestFunctional/parallel/MountCmd/any-port 5.09
146 TestFunctional/parallel/MountCmd/specific-port 0.92
147 TestFunctional/parallel/ServiceCmd/List 0.3
148 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
149 TestFunctional/parallel/MountCmd/VerifyCleanup 2.01
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.13
151 TestFunctional/parallel/ServiceCmd/Format 0.09
152 TestFunctional/parallel/ServiceCmd/URL 0.09
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 184.5
160 TestMultiControlPlane/serial/DeployApp 5.97
161 TestMultiControlPlane/serial/PingHostFromPods 0.75
162 TestMultiControlPlane/serial/AddWorkerNode 53.23
163 TestMultiControlPlane/serial/NodeLabels 0.13
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.3
165 TestMultiControlPlane/serial/CopyFile 4.28
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.5
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.04
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 1.91
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.03
258 TestStoppedBinaryUpgrade/Setup 1.27
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 31.41
276 TestNoKubernetes/serial/Stop 3.13
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.65
293 TestStartStop/group/old-k8s-version/serial/Stop 2.89
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
306 TestStartStop/group/no-preload/serial/Stop 3.92
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
311 TestStartStop/group/embed-certs/serial/Stop 3.56
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.1
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.14
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
333 TestStartStop/group/newest-cni/serial/Stop 3.31
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0927 09:55:26.325209    2039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0927 09:55:26.325500    2039 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-196000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-196000: exit status 85 (100.463375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-196000 | jenkins | v1.34.0 | 27 Sep 24 09:54 PDT |          |
	|         | -p download-only-196000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 09:54:58
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 09:54:58.791854    2040 out.go:345] Setting OutFile to fd 1 ...
	I0927 09:54:58.791998    2040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 09:54:58.792002    2040 out.go:358] Setting ErrFile to fd 2...
	I0927 09:54:58.792005    2040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 09:54:58.792126    2040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	W0927 09:54:58.792233    2040 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19712-1508/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19712-1508/.minikube/config/config.json: no such file or directory
	I0927 09:54:58.793499    2040 out.go:352] Setting JSON to true
	I0927 09:54:58.810578    2040 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1462,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 09:54:58.810641    2040 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 09:54:58.816428    2040 out.go:97] [download-only-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 09:54:58.816559    2040 notify.go:220] Checking for updates...
	W0927 09:54:58.816620    2040 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 09:54:58.819455    2040 out.go:169] MINIKUBE_LOCATION=19712
	I0927 09:54:58.826403    2040 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 09:54:58.831425    2040 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 09:54:58.835414    2040 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 09:54:58.837007    2040 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	W0927 09:54:58.843401    2040 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 09:54:58.843634    2040 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 09:54:58.848512    2040 out.go:97] Using the qemu2 driver based on user configuration
	I0927 09:54:58.848536    2040 start.go:297] selected driver: qemu2
	I0927 09:54:58.848553    2040 start.go:901] validating driver "qemu2" against <nil>
	I0927 09:54:58.848638    2040 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 09:54:58.852412    2040 out.go:169] Automatically selected the socket_vmnet network
	I0927 09:54:58.858195    2040 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0927 09:54:58.858323    2040 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 09:54:58.858369    2040 cni.go:84] Creating CNI manager for ""
	I0927 09:54:58.858420    2040 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0927 09:54:58.858473    2040 start.go:340] cluster config:
	{Name:download-only-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 09:54:58.864044    2040 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 09:54:58.867431    2040 out.go:97] Downloading VM boot image ...
	I0927 09:54:58.867457    2040 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0927 09:55:14.187682    2040 out.go:97] Starting "download-only-196000" primary control-plane node in "download-only-196000" cluster
	I0927 09:55:14.187711    2040 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 09:55:14.250400    2040 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0927 09:55:14.250423    2040 cache.go:56] Caching tarball of preloaded images
	I0927 09:55:14.250604    2040 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 09:55:14.253765    2040 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0927 09:55:14.253771    2040 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0927 09:55:14.343117    2040 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0927 09:55:25.021652    2040 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0927 09:55:25.021849    2040 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0927 09:55:25.717868    2040 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0927 09:55:25.718068    2040 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/download-only-196000/config.json ...
	I0927 09:55:25.718085    2040 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/download-only-196000/config.json: {Name:mk1b8dc3dd5838cefe8bb7629d424dc90e128c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 09:55:25.718361    2040 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 09:55:25.718555    2040 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0927 09:55:26.285258    2040 out.go:193] 
	W0927 09:55:26.291303    2040 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19712-1508/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0 0x108d516c0] Decompressors:map[bz2:0x1400015fd00 gz:0x1400015fd08 tar:0x1400015fc40 tar.bz2:0x1400015fc50 tar.gz:0x1400015fc60 tar.xz:0x1400015fc90 tar.zst:0x1400015fce0 tbz2:0x1400015fc50 tgz:0x1400015fc60 txz:0x1400015fc90 tzst:0x1400015fce0 xz:0x1400015fd10 zip:0x1400015fd20 zst:0x1400015fd18] Getters:map[file:0x1400136e8a0 http:0x140000b8c30 https:0x140000b8dc0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0927 09:55:26.291329    2040 out_reason.go:110] 
	W0927 09:55:26.297185    2040 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 09:55:26.301170    2040 out.go:193] 
	
	
	* The control-plane node download-only-196000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-196000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
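The Last Start log above ends with a 404 while fetching the kubectl checksum for v1.20.0 on darwin/arm64; the likely cause, inferred from the 404 rather than stated by the test, is that upstream Kubernetes does not publish darwin/arm64 kubectl binaries for v1.20.x. A hypothetical spot-check reusing the URL from the getter error:

	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1

Per the error above, this returns a 404 response.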

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-196000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (8.43s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-992000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-992000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (8.431019666s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.43s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0927 09:55:35.115201    2039 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0927 09:55:35.115264    2039 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-992000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-992000: exit status 85 (76.25625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-196000 | jenkins | v1.34.0 | 27 Sep 24 09:54 PDT |                     |
	|         | -p download-only-196000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT | 27 Sep 24 09:55 PDT |
	| delete  | -p download-only-196000        | download-only-196000 | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT | 27 Sep 24 09:55 PDT |
	| start   | -o=json --download-only        | download-only-992000 | jenkins | v1.34.0 | 27 Sep 24 09:55 PDT |                     |
	|         | -p download-only-992000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 09:55:26
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 09:55:26.711852    2071 out.go:345] Setting OutFile to fd 1 ...
	I0927 09:55:26.711966    2071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 09:55:26.711968    2071 out.go:358] Setting ErrFile to fd 2...
	I0927 09:55:26.711978    2071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 09:55:26.712116    2071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 09:55:26.713235    2071 out.go:352] Setting JSON to true
	I0927 09:55:26.729249    2071 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1490,"bootTime":1727454636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 09:55:26.729309    2071 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 09:55:26.734159    2071 out.go:97] [download-only-992000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 09:55:26.734249    2071 notify.go:220] Checking for updates...
	I0927 09:55:26.738198    2071 out.go:169] MINIKUBE_LOCATION=19712
	I0927 09:55:26.741269    2071 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 09:55:26.745201    2071 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 09:55:26.748206    2071 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 09:55:26.751101    2071 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	W0927 09:55:26.757196    2071 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 09:55:26.757409    2071 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 09:55:26.758971    2071 out.go:97] Using the qemu2 driver based on user configuration
	I0927 09:55:26.758981    2071 start.go:297] selected driver: qemu2
	I0927 09:55:26.758985    2071 start.go:901] validating driver "qemu2" against <nil>
	I0927 09:55:26.759037    2071 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 09:55:26.762105    2071 out.go:169] Automatically selected the socket_vmnet network
	I0927 09:55:26.767401    2071 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0927 09:55:26.767504    2071 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 09:55:26.767522    2071 cni.go:84] Creating CNI manager for ""
	I0927 09:55:26.767558    2071 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 09:55:26.767564    2071 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 09:55:26.767614    2071 start.go:340] cluster config:
	{Name:download-only-992000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 09:55:26.771092    2071 iso.go:125] acquiring lock: {Name:mk1cc11176ebac73500ceab74c7296f37e6349a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 09:55:26.774216    2071 out.go:97] Starting "download-only-992000" primary control-plane node in "download-only-992000" cluster
	I0927 09:55:26.774225    2071 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 09:55:26.829251    2071 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0927 09:55:26.829266    2071 cache.go:56] Caching tarball of preloaded images
	I0927 09:55:26.829433    2071 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 09:55:26.834553    2071 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0927 09:55:26.834562    2071 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0927 09:55:26.918857    2071 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19712-1508/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-992000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-992000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-992000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-289000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-289000: exit status 85 (55.810291ms)

-- stdout --
	* Profile "addons-289000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-289000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-289000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-289000: exit status 85 (59.205791ms)

-- stdout --
	* Profile "addons-289000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-289000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (197.78s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-289000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-289000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m17.78301275s)
--- PASS: TestAddons/Setup (197.78s)

TestAddons/serial/Volcano (38.88s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 8.904416ms
addons_test.go:835: volcano-scheduler stabilized in 8.993875ms
addons_test.go:843: volcano-admission stabilized in 9.025791ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-fxb9d" [0cc00b17-b32e-41dd-be3b-d9fce48607b6] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005874375s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-plk5m" [7196b708-036c-4a6a-a131-574f24e97c14] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002846625s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-v52q7" [cfa85fa5-3d50-41b1-9298-88abcd955c00] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005139041s
addons_test.go:870: (dbg) Run:  kubectl --context addons-289000 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-289000 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-289000 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [2af0e34d-a408-42fe-91e3-1350a793668d] Pending
helpers_test.go:344: "test-job-nginx-0" [2af0e34d-a408-42fe-91e3-1350a793668d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [2af0e34d-a408-42fe-91e3-1350a793668d] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004771291s
addons_test.go:906: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-darwin-arm64 -p addons-289000 addons disable volcano --alsologtostderr -v=1: (10.634237625s)
--- PASS: TestAddons/serial/Volcano (38.88s)
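The three stabilization waits in this test are plain label-selector polls; a hypothetical manual equivalent of the scheduler check, using only names taken from the log, would be:

	kubectl --context addons-289000 -n volcano-system wait pod -l app=volcano-scheduler --for=condition=Ready --timeout=360s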

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-289000 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-289000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Ingress (17.3s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-289000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-289000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-289000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4df65dc7-e657-4094-894b-dc4ddc7ff03c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4df65dc7-e657-4094-894b-dc4ddc7ff03c] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003659917s
I0927 10:09:14.784237    2039 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-289000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-darwin-arm64 -p addons-289000 addons disable ingress --alsologtostderr -v=1: (7.21098825s)
--- PASS: TestAddons/parallel/Ingress (17.30s)

TestAddons/parallel/InspektorGadget (10.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8pdp8" [f651f62d-5ada-47d3-91de-9f28eaefba2d] Running
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004247334s
addons_test.go:789: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-289000
addons_test.go:789: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-289000: (5.255940625s)
--- PASS: TestAddons/parallel/InspektorGadget (10.26s)

TestAddons/parallel/MetricsServer (6.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.394084ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-tvxxb" [d00dfe12-41ce-4d1d-bddb-977193e314d9] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.011455458s
addons_test.go:413: (dbg) Run:  kubectl --context addons-289000 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.29s)
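Once the metrics-server pod reports Running, pod resource metrics become queryable with the same command the test runs; a minimal sketch:

    # list CPU/memory usage for system pods (fails until metrics-server is serving)
    kubectl --context addons-289000 top pods -n kube-system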

TestAddons/parallel/CSI (52.41s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0927 10:08:50.332672    2039 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0927 10:08:50.335132    2039 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 10:08:50.335142    2039 kapi.go:107] duration metric: took 2.497041ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 2.500875ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-289000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-289000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0ec51277-b1bf-4bcb-b2be-527d63322948] Pending
helpers_test.go:344: "task-pv-pod" [0ec51277-b1bf-4bcb-b2be-527d63322948] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0ec51277-b1bf-4bcb-b2be-527d63322948] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004262208s
addons_test.go:528: (dbg) Run:  kubectl --context addons-289000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-289000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-289000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-289000 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-289000 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-289000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-289000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [aed00cee-fab5-4757-8ebf-60318934c027] Pending
helpers_test.go:344: "task-pv-pod-restore" [aed00cee-fab5-4757-8ebf-60318934c027] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [aed00cee-fab5-4757-8ebf-60318934c027] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00746s
addons_test.go:570: (dbg) Run:  kubectl --context addons-289000 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-289000 delete pod task-pv-pod-restore: (1.104101625s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-289000 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-289000 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-darwin-arm64 -p addons-289000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.126571125s)
addons_test.go:586: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.41s)
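The repeated helpers_test.go:394 lines above are a poll loop on the claim's phase; a rough shell equivalent of that wait (without the 6m0s timeout the helper enforces):

    # block until the hpvc claim is bound by the csi-hostpath driver
    until [ "$(kubectl --context addons-289000 get pvc hpvc \
        -o jsonpath='{.status.phase}' -n default)" = "Bound" ]; do
        sleep 2
    done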

TestAddons/parallel/Headlamp (16.61s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-289000 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-xl5sc" [a85936f4-ce09-43a5-9d86-e7176b5c5d43] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-xl5sc" [a85936f4-ce09-43a5-9d86-e7176b5c5d43] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.013486375s
addons_test.go:777: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-darwin-arm64 -p addons-289000 addons disable headlamp --alsologtostderr -v=1: (5.251861958s)
--- PASS: TestAddons/parallel/Headlamp (16.61s)

TestAddons/parallel/CloudSpanner (5.19s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-h4rtk" [7468aa80-a725-44ba-8406-0712dafb5d70] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00574275s
addons_test.go:808: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-289000
--- PASS: TestAddons/parallel/CloudSpanner (5.19s)

TestAddons/parallel/LocalPath (41.02s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-289000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-289000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-289000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [53c1368c-c1a8-48df-9164-91b838e50b2e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [53c1368c-c1a8-48df-9164-91b838e50b2e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [53c1368c-c1a8-48df-9164-91b838e50b2e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.010040625s
addons_test.go:938: (dbg) Run:  kubectl --context addons-289000 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 ssh "cat /opt/local-path-provisioner/pvc-f084ad39-b9a4-43f9-bfcc-54549c24f9b6_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-289000 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-289000 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-darwin-arm64 -p addons-289000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.494234167s)
--- PASS: TestAddons/parallel/LocalPath (41.02s)
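The test writes through a local-path PVC and then reads the data back off the node's hostPath; a sketch of the same round trip (the directory name embeds the claim's UID, pvc-f084ad39-... in this run, so it differs on every replay):

    kubectl --context addons-289000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-289000 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once the pod completes, the written file is visible on the node itself
    minikube -p addons-289000 ssh \
        "cat /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/file1"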

TestAddons/parallel/NvidiaDevicePlugin (6.2s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xt8td" [08fa369e-90fd-4647-80fc-7b8e9368fb62] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.010778958s
addons_test.go:1002: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-289000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.20s)

TestAddons/parallel/Yakd (11.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-8f2jc" [87add40a-cd4a-41b4-991a-12e579dc9aeb] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.008092834s
addons_test.go:1014: (dbg) Run:  out/minikube-darwin-arm64 -p addons-289000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-darwin-arm64 -p addons-289000 addons disable yakd --alsologtostderr -v=1: (5.251364375s)
--- PASS: TestAddons/parallel/Yakd (11.26s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-289000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-289000: (12.202841625s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-289000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-289000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-289000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (10.49s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0927 10:34:52.338249    2039 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 10:34:52.338461    2039 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0927 10:34:54.311135    2039 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0927 10:34:54.311383    2039 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0927 10:34:54.311430    2039 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/001/docker-machine-driver-hyperkit
I0927 10:34:54.816888    2039 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1043c2d40 0x1043c2d40 0x1043c2d40 0x1043c2d40 0x1043c2d40 0x1043c2d40 0x1043c2d40] Decompressors:map[bz2:0x140004ff6f0 gz:0x140004ff6f8 tar:0x140004ff6a0 tar.bz2:0x140004ff6b0 tar.gz:0x140004ff6c0 tar.xz:0x140004ff6d0 tar.zst:0x140004ff6e0 tbz2:0x140004ff6b0 tgz:0x140004ff6c0 txz:0x140004ff6d0 tzst:0x140004ff6e0 xz:0x140004ff700 zip:0x140004ff710 zst:0x140004ff708] Getters:map[file:0x14001460550 http:0x14000592370 https:0x140005923c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 10:34:54.817031    2039 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1250705980/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.49s)
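The 404 above is the expected path: no arm64-suffixed hyperkit driver exists at v1.3.0, so once the arch-specific checksum download fails, the installer retries the common artifact. A hypothetical shell sketch of that fallback order, using the release URLs from the log:

    # try the arch-suffixed artifact first; if its checksum file 404s
    # (curl -f fails on HTTP errors), fall back to the unsuffixed name
    base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    if curl -fsLO "$base/docker-machine-driver-hyperkit-arm64.sha256"; then
        curl -fsLO "$base/docker-machine-driver-hyperkit-arm64"
    else
        curl -fsLO "$base/docker-machine-driver-hyperkit"
    fi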

TestErrorSpam/setup (35.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-697000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-697000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 --driver=qemu2 : (35.656943792s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (35.66s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.23s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 status
--- PASS: TestErrorSpam/status (0.23s)

TestErrorSpam/pause (0.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 pause
--- PASS: TestErrorSpam/pause (0.69s)

TestErrorSpam/unpause (0.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (55.3s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 stop: (3.190849625s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 stop: (26.057844083s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-697000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-697000 stop: (26.053537584s)
--- PASS: TestErrorSpam/stop (55.30s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19712-1508/.minikube/files/etc/test/nested/copy/2039/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (73.4s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-513000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-513000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m13.401749875s)
--- PASS: TestFunctional/serial/StartWithProxy (73.40s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.08s)

=== RUN   TestFunctional/serial/SoftStart
I0927 10:12:42.789995    2039 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-513000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-513000 --alsologtostderr -v=8: (36.080814916s)
functional_test.go:663: soft start took 36.081282916s for "functional-513000" cluster.
I0927 10:13:18.870461    2039 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (36.08s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-513000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-513000 cache add registry.k8s.io/pause:3.1: (1.008067625s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)

TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1917657594/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 cache add minikube-local-cache-test:functional-513000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-513000 cache add minikube-local-cache-test:functional-513000: (1.333117625s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 cache delete minikube-local-cache-test:functional-513000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-513000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-513000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (63.810584ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.63s)
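The expected exit status 1 in the middle is the point of the test: the image is deleted inside the node, shown to be missing, then restored from the host-side cache. A sketch of the same check by hand (plain minikube standing in for the build artifact):

    minikube -p functional-513000 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-513000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    minikube -p functional-513000 cache reload
    minikube -p functional-513000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored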

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (2.42s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 kubectl -- --context functional-513000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-513000 kubectl -- --context functional-513000 get pods: (2.419901s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.42s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-513000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-513000 get pods: (1.023929167s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (38.64s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-513000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0927 10:13:53.689053    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:13:53.696807    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:13:53.709630    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:13:53.733233    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:13:53.775179    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:13:53.858759    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:13:54.022254    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:13:54.345843    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:13:54.989513    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:13:56.273203    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:13:58.836931    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:14:03.960380    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-513000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.641966166s)
functional_test.go:761: restart took 38.642064458s for "functional-513000" cluster.
I0927 10:14:06.257557    2039 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.64s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-513000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.65s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd130893963/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.65s)

TestFunctional/serial/InvalidService (3.78s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-513000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-513000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-513000: exit status 115 (145.825375ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32610 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-513000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.78s)
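Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the manifest defines a Service with no running pod behind it. A sketch of the same negative check:

    kubectl --context functional-513000 apply -f testdata/invalidsvc.yaml
    # no backing pod, so this should fail with exit status 115
    minikube -p functional-513000 service invalid-svc; echo "exit: $?"
    kubectl --context functional-513000 delete -f testdata/invalidsvc.yaml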

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-513000 config get cpus: exit status 14 (30.746875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-513000 config get cpus: exit status 14 (28.896875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
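Exit status 14 is the "specified key could not be found in config" code seen in the stderr blocks above, so set/unset/get round-trip as the test expects; a condensed sketch:

    minikube -p functional-513000 config set cpus 2
    minikube -p functional-513000 config get cpus     # prints the stored value
    minikube -p functional-513000 config unset cpus
    minikube -p functional-513000 config get cpus     # exit 14: key not in config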

TestFunctional/parallel/DashboardCmd (6.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-513000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-513000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3380: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.66s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-513000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-513000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (117.870708ms)

-- stdout --
	* [functional-513000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0927 10:15:01.474485    3357 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:15:01.474605    3357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:15:01.474608    3357 out.go:358] Setting ErrFile to fd 2...
	I0927 10:15:01.474610    3357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:15:01.474748    3357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:15:01.475825    3357 out.go:352] Setting JSON to false
	I0927 10:15:01.493366    3357 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2665,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:15:01.493435    3357 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:15:01.499086    3357 out.go:177] * [functional-513000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0927 10:15:01.506023    3357 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:15:01.506063    3357 notify.go:220] Checking for updates...
	I0927 10:15:01.513028    3357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:15:01.516032    3357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:15:01.519051    3357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:15:01.522023    3357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:15:01.525061    3357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:15:01.528358    3357 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:15:01.528616    3357 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:15:01.533025    3357 out.go:177] * Using the qemu2 driver based on existing profile
	I0927 10:15:01.540040    3357 start.go:297] selected driver: qemu2
	I0927 10:15:01.540047    3357 start.go:901] validating driver "qemu2" against &{Name:functional-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:15:01.540103    3357 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:15:01.547032    3357 out.go:201] 
	W0927 10:15:01.551066    3357 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 10:15:01.554922    3357 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-513000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
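--dry-run validates the requested configuration without touching the cluster; 250MB is below the 1800MB floor the validator enforces, so the run exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch of the same check by hand:

    # fails fast in validation, before any VM work happens
    minikube start -p functional-513000 --dry-run --memory 250MB --driver=qemu2
    echo "exit: $?"   # expect 23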

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-513000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-513000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.847417ms)

-- stdout --
	* [functional-513000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0927 10:15:01.709971    3368 out.go:345] Setting OutFile to fd 1 ...
	I0927 10:15:01.710077    3368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:15:01.710080    3368 out.go:358] Setting ErrFile to fd 2...
	I0927 10:15:01.710082    3368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 10:15:01.710214    3368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
	I0927 10:15:01.711693    3368 out.go:352] Setting JSON to false
	I0927 10:15:01.729443    3368 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2665,"bootTime":1727454636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0927 10:15:01.729532    3368 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0927 10:15:01.735100    3368 out.go:177] * [functional-513000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0927 10:15:01.742050    3368 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 10:15:01.742122    3368 notify.go:220] Checking for updates...
	I0927 10:15:01.749008    3368 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	I0927 10:15:01.752063    3368 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0927 10:15:01.755036    3368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 10:15:01.758037    3368 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	I0927 10:15:01.761082    3368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 10:15:01.762840    3368 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 10:15:01.763093    3368 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 10:15:01.767953    3368 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0927 10:15:01.774895    3368 start.go:297] selected driver: qemu2
	I0927 10:15:01.774903    3368 start.go:901] validating driver "qemu2" against &{Name:functional-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 10:15:01.774957    3368 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 10:15:01.781064    3368 out.go:201] 
	W0927 10:15:01.785009    3368 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0927 10:15:01.788945    3368 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
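
Note: this is the same dry-run failure as in DryRun, with minikube's output localized to French; in English the stderr reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". A sketch of forcing a localized run, assuming the locale is picked up from the standard environment variables as the test harness sets them:

    LC_ALL=fr LANG=fr out/minikube-darwin-arm64 start -p functional-513000 --dry-run --memory 250MB --driver=qemu2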

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
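
Note: the three invocations above cover the status output modes: default human-readable text, a caller-supplied Go template via -f (the "kublet" key is verbatim from the test's template string), and JSON via -o json. A sketch of the template form:

    out/minikube-darwin-arm64 -p functional-513000 status -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'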

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (26.07s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4d7d0e23-a79b-48c4-bed9-2905f3ef1bbe] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004660541s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-513000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-513000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-513000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-513000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0676218c-07b4-489c-8f1e-b40ae63d1f31] Pending
helpers_test.go:344: "sp-pod" [0676218c-07b4-489c-8f1e-b40ae63d1f31] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0927 10:14:34.687001    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [0676218c-07b4-489c-8f1e-b40ae63d1f31] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.010630916s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-513000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-513000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-513000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [802f7276-3e0c-4633-8d47-79009353d78a] Pending
helpers_test.go:344: "sp-pod" [802f7276-3e0c-4633-8d47-79009353d78a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [802f7276-3e0c-4633-8d47-79009353d78a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003821625s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-513000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.07s)
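
Note: this test verifies that data written to a PersistentVolumeClaim outlives the pod that wrote it: a claim and a pod mounting it are created, /tmp/mount/foo is touched, the pod is deleted and recreated, and the file is listed again. A condensed sketch of the same flow, using the manifests and names from this run:

    kubectl --context functional-513000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-513000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-513000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-513000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-513000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-513000 exec sp-pod -- ls /tmp/mount    # foo should still be listed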

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh -n functional-513000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 cp functional-513000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2233712225/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh -n functional-513000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh -n functional-513000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.38s)
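
Note: minikube cp copies files in both directions and creates missing parent directories on the guest; each copy above is verified with ssh + sudo cat. A sketch of the three cases (the local destination path is illustrative):

    out/minikube-darwin-arm64 -p functional-513000 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> guest
    out/minikube-darwin-arm64 -p functional-513000 cp functional-513000:/home/docker/cp-test.txt ./cp-test.txt   # guest -> host
    out/minikube-darwin-arm64 -p functional-513000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # parents created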

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2039/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "sudo cat /etc/test/nested/copy/2039/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2039.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "sudo cat /etc/ssl/certs/2039.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2039.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "sudo cat /usr/share/ca-certificates/2039.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/20392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "sudo cat /etc/ssl/certs/20392.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/20392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "sudo cat /usr/share/ca-certificates/20392.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.38s)
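
Note: CertSync checks that a CA certificate placed in the minikube home is synced into the VM both under its own file name (e.g. /etc/ssl/certs/2039.pem) and under what appears to be its OpenSSL subject-hash name (51391683.0). A sketch of computing that hash on the host, assuming a PEM certificate:

    openssl x509 -hash -noout -in 2039.pem    # prints the hash; the in-VM name is <hash>.0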

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-513000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.11s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-513000 ssh "sudo systemctl is-active crio": exit status 1 (113.187666ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.11s)
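
Note: the non-zero exit is the expected result here: systemctl is-active exits 3 for an inactive unit, the remote status surfaces as "ssh: Process exited with status 3", and the minikube command itself returns 1 while stdout still reads "inactive". A sketch:

    out/minikube-darwin-arm64 -p functional-513000 ssh "sudo systemctl is-active crio"; echo $?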

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-513000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-513000
docker.io/kicbase/echo-server:functional-513000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-513000 image ls --format short --alsologtostderr:
I0927 10:15:03.441757    3399 out.go:345] Setting OutFile to fd 1 ...
I0927 10:15:03.441943    3399 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:15:03.441950    3399 out.go:358] Setting ErrFile to fd 2...
I0927 10:15:03.441953    3399 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:15:03.442097    3399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
I0927 10:15:03.442499    3399 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:15:03.442558    3399 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:15:03.443436    3399 ssh_runner.go:195] Run: systemctl --version
I0927 10:15:03.443447    3399 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
I0927 10:15:03.464937    3399 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-513000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-513000 | ef9bd01c5ab87 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| localhost/my-image                          | functional-513000 | 8dd43b429036d | 1.41MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/kicbase/echo-server               | functional-513000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-513000 image ls --format table --alsologtostderr:
I0927 10:15:05.540936    3412 out.go:345] Setting OutFile to fd 1 ...
I0927 10:15:05.541115    3412 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:15:05.541118    3412 out.go:358] Setting ErrFile to fd 2...
I0927 10:15:05.541121    3412 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:15:05.541244    3412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
I0927 10:15:05.541744    3412 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:15:05.541810    3412 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:15:05.542699    3412 ssh_runner.go:195] Run: systemctl --version
I0927 10:15:05.542707    3412 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
I0927 10:15:05.564889    3412 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/09/27 10:15:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-513000 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-513000"],"size":"4780000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"8dd43b429036dfe94a84f531f386eaa38d6455496c1d8099b5bd2f3dc54c04f3","repoDigests":[],"repoTags":["localhost/my-image:functional-513000"],"size":"1410000"},{"id":"ef9bd01c5ab87dc2964b9f1458da169342872d5e5592bbbe872a38523ec82f2a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-513000"],"size":"30"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-513000 image ls --format json --alsologtostderr:
I0927 10:15:05.473070    3410 out.go:345] Setting OutFile to fd 1 ...
I0927 10:15:05.473318    3410 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:15:05.473324    3410 out.go:358] Setting ErrFile to fd 2...
I0927 10:15:05.473327    3410 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:15:05.473476    3410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
I0927 10:15:05.473945    3410 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:15:05.474009    3410 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:15:05.474855    3410 ssh_runner.go:195] Run: systemctl --version
I0927 10:15:05.474864    3410 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
I0927 10:15:05.496773    3410 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
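
Note: of the four image ls formats (short, table, json, yaml), json is the easiest to post-process. A sketch, assuming jq is available on the host:

    out/minikube-darwin-arm64 -p functional-513000 image ls --format json \
        | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'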

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-513000 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: ef9bd01c5ab87dc2964b9f1458da169342872d5e5592bbbe872a38523ec82f2a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-513000
size: "30"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-513000
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-513000 image ls --format yaml --alsologtostderr:
I0927 10:15:03.506203    3401 out.go:345] Setting OutFile to fd 1 ...
I0927 10:15:03.506342    3401 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:15:03.506350    3401 out.go:358] Setting ErrFile to fd 2...
I0927 10:15:03.506352    3401 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:15:03.506480    3401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
I0927 10:15:03.506913    3401 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:15:03.506972    3401 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:15:03.507746    3401 ssh_runner.go:195] Run: systemctl --version
I0927 10:15:03.507754    3401 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
I0927 10:15:03.529170    3401 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-513000 ssh pgrep buildkitd: exit status 1 (58.625542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image build -t localhost/my-image:functional-513000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-513000 image build -t localhost/my-image:functional-513000 testdata/build --alsologtostderr: (1.7717075s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-513000 image build -t localhost/my-image:functional-513000 testdata/build --alsologtostderr:
I0927 10:15:03.630389    3405 out.go:345] Setting OutFile to fd 1 ...
I0927 10:15:03.630606    3405 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:15:03.630609    3405 out.go:358] Setting ErrFile to fd 2...
I0927 10:15:03.630615    3405 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 10:15:03.630777    3405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19712-1508/.minikube/bin
I0927 10:15:03.631244    3405 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:15:03.632034    3405 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 10:15:03.632939    3405 ssh_runner.go:195] Run: systemctl --version
I0927 10:15:03.632950    3405 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19712-1508/.minikube/machines/functional-513000/id_rsa Username:docker}
I0927 10:15:03.658366    3405 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3376001316.tar
I0927 10:15:03.658431    3405 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0927 10:15:03.661966    3405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3376001316.tar
I0927 10:15:03.663504    3405 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3376001316.tar: stat -c "%s %y" /var/lib/minikube/build/build.3376001316.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3376001316.tar': No such file or directory
I0927 10:15:03.663525    3405 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3376001316.tar --> /var/lib/minikube/build/build.3376001316.tar (3072 bytes)
I0927 10:15:03.671409    3405 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3376001316
I0927 10:15:03.674959    3405 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3376001316 -xf /var/lib/minikube/build/build.3376001316.tar
I0927 10:15:03.678618    3405 docker.go:360] Building image: /var/lib/minikube/build/build.3376001316
I0927 10:15:03.678671    3405 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-513000 /var/lib/minikube/build/build.3376001316
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:8dd43b429036dfe94a84f531f386eaa38d6455496c1d8099b5bd2f3dc54c04f3 done
#8 naming to localhost/my-image:functional-513000 done
#8 DONE 0.0s
I0927 10:15:05.350771    3405 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-513000 /var/lib/minikube/build/build.3376001316: (1.672113s)
I0927 10:15:05.350847    3405 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3376001316
I0927 10:15:05.354972    3405 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3376001316.tar
I0927 10:15:05.358428    3405 build_images.go:217] Built localhost/my-image:functional-513000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3376001316.tar
I0927 10:15:05.358448    3405 build_images.go:133] succeeded building to: functional-513000
I0927 10:15:05.358452    3405 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.90s)
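
Note: image build tars the context directory, copies it into the VM over SSH, and runs docker build there; the failed pgrep buildkitd probe above is expected on the docker runtime. Judging from the BuildKit steps in the log, testdata/build is roughly the following context (a reconstruction for illustration, not the verbatim testdata):

    mkdir -p /tmp/build && cd /tmp/build
    echo 'test' > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-darwin-arm64 -p functional-513000 image build -t localhost/my-image:functional-513000 .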

TestFunctional/parallel/ImageCommands/Setup (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.764244667s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-513000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

TestFunctional/parallel/DockerEnv/bash (0.32s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-513000 docker-env) && out/minikube-darwin-arm64 status -p functional-513000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-513000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.32s)
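
Note: docker-env prints shell exports (DOCKER_HOST, DOCKER_CERT_PATH, ...) that point a local docker client at the daemon inside the VM, which is why both assertions above run under eval in a bash subshell:

    eval $(out/minikube-darwin-arm64 -p functional-513000 docker-env)
    docker images    # now lists the VM's images, not the host's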

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image load --daemon kicbase/echo-server:functional-513000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image load --daemon kicbase/echo-server:functional-513000 --alsologtostderr
E0927 10:14:14.203684    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.40s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-513000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image load --daemon kicbase/echo-server:functional-513000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image save kicbase/echo-server:functional-513000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image rm kicbase/echo-server:functional-513000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-513000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 image save --daemon kicbase/echo-server:functional-513000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-513000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
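
Note: taken together, the ImageCommands subtests form a round-trip: load a host-tagged image into the VM, save it to a tar, remove it, reload it from the tar, and finally save it back into the host's docker daemon. A condensed sketch (the tar path is illustrative):

    out/minikube-darwin-arm64 -p functional-513000 image load --daemon kicbase/echo-server:functional-513000
    out/minikube-darwin-arm64 -p functional-513000 image save kicbase/echo-server:functional-513000 /tmp/echo-server.tar
    out/minikube-darwin-arm64 -p functional-513000 image rm kicbase/echo-server:functional-513000
    out/minikube-darwin-arm64 -p functional-513000 image load /tmp/echo-server.tar
    out/minikube-darwin-arm64 -p functional-513000 image save --daemon kicbase/echo-server:functional-513000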

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-513000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-513000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-513000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-513000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3216: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-513000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-513000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [015c937f-a677-4160-80c3-ed743efcaabc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [015c937f-a677-4160-80c3-ed743efcaabc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.01070725s
I0927 10:14:28.042811    2039 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-513000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.212.62 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0927 10:14:28.136702    2039 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0927 10:14:28.175298    2039 config.go:182] Loaded profile config "functional-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-513000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-513000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-513000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-xxk2l" [c137c84a-6e55-491a-9309-ac55908f155f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-xxk2l" [c137c84a-6e55-491a-9309-ac55908f155f] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.009228542s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.10s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "97.311209ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "36.64575ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "95.119875ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.518541ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port893165578/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727457295029573000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port893165578/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727457295029573000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port893165578/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727457295029573000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port893165578/001/test-1727457295029573000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.488416ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 10:14:55.092691    2039 retry.go:31] will retry after 356.681629ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 17:14 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 17:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 17:14 test-1727457295029573000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh cat /mount-9p/test-1727457295029573000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-513000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [17ff0df9-1ecf-4444-8d30-60db56001a48] Pending
helpers_test.go:344: "busybox-mount" [17ff0df9-1ecf-4444-8d30-60db56001a48] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [17ff0df9-1ecf-4444-8d30-60db56001a48] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [17ff0df9-1ecf-4444-8d30-60db56001a48] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.008136041s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-513000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port893165578/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.09s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port652676907/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.094583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 10:15:00.186318    2039 retry.go:31] will retry after 447.414174ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port652676907/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-513000 ssh "sudo umount -f /mount-9p": exit status 1 (58.523292ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-513000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port652676907/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.92s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 service list -o json
functional_test.go:1494: Took "306.532167ms" to run "out/minikube-darwin-arm64 -p functional-513000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1046373025/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1046373025/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1046373025/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T" /mount1: exit status 1 (65.979333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 10:15:01.111183    2039 retry.go:31] will retry after 712.48035ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T" /mount3: exit status 1 (66.854084ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 10:15:02.016939    2039 retry.go:31] will retry after 818.018983ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-513000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1046373025/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1046373025/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-513000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1046373025/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31163
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-513000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31163
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-513000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-513000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-513000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-500000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0927 10:15:15.649932    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:16:37.568513    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-500000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m4.304357625s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (184.50s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-500000 -- rollout status deployment/busybox: (4.487037167s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-6wdrp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-fbp5j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-kmw4d -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-6wdrp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-fbp5j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-kmw4d -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-6wdrp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-fbp5j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-kmw4d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.97s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-6wdrp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-6wdrp -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-fbp5j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-fbp5j -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-kmw4d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec busybox-7dff88458-kmw4d -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-500000 -v=7 --alsologtostderr
E0927 10:18:53.679670    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/addons-289000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:19:12.554404    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:19:12.562016    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:19:12.574273    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:19:12.597290    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:19:12.640529    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
E0927 10:19:12.723969    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-500000 -v=7 --alsologtostderr: (53.009839041s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
E0927 10:19:12.887391    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.23s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-500000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
E0927 10:19:13.210103    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.30s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp testdata/cp-test.txt ha-500000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000 "sudo cat /home/docker/cp-test.txt"
E0927 10:19:13.851909    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile4007315994/001/cp-test_ha-500000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000:/home/docker/cp-test.txt ha-500000-m02:/home/docker/cp-test_ha-500000_ha-500000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m02 "sudo cat /home/docker/cp-test_ha-500000_ha-500000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000:/home/docker/cp-test.txt ha-500000-m03:/home/docker/cp-test_ha-500000_ha-500000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m03 "sudo cat /home/docker/cp-test_ha-500000_ha-500000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000:/home/docker/cp-test.txt ha-500000-m04:/home/docker/cp-test_ha-500000_ha-500000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m04 "sudo cat /home/docker/cp-test_ha-500000_ha-500000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp testdata/cp-test.txt ha-500000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile4007315994/001/cp-test_ha-500000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m02:/home/docker/cp-test.txt ha-500000:/home/docker/cp-test_ha-500000-m02_ha-500000.txt
E0927 10:19:15.135403    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000 "sudo cat /home/docker/cp-test_ha-500000-m02_ha-500000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m02:/home/docker/cp-test.txt ha-500000-m03:/home/docker/cp-test_ha-500000-m02_ha-500000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m03 "sudo cat /home/docker/cp-test_ha-500000-m02_ha-500000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m02:/home/docker/cp-test.txt ha-500000-m04:/home/docker/cp-test_ha-500000-m02_ha-500000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m04 "sudo cat /home/docker/cp-test_ha-500000-m02_ha-500000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp testdata/cp-test.txt ha-500000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile4007315994/001/cp-test_ha-500000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m03:/home/docker/cp-test.txt ha-500000:/home/docker/cp-test_ha-500000-m03_ha-500000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000 "sudo cat /home/docker/cp-test_ha-500000-m03_ha-500000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m03:/home/docker/cp-test.txt ha-500000-m02:/home/docker/cp-test_ha-500000-m03_ha-500000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m02 "sudo cat /home/docker/cp-test_ha-500000-m03_ha-500000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m03:/home/docker/cp-test.txt ha-500000-m04:/home/docker/cp-test_ha-500000-m03_ha-500000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m04 "sudo cat /home/docker/cp-test_ha-500000-m03_ha-500000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp testdata/cp-test.txt ha-500000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile4007315994/001/cp-test_ha-500000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m04:/home/docker/cp-test.txt ha-500000:/home/docker/cp-test_ha-500000-m04_ha-500000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000 "sudo cat /home/docker/cp-test_ha-500000-m04_ha-500000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m04:/home/docker/cp-test.txt ha-500000-m02:/home/docker/cp-test_ha-500000-m04_ha-500000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m02 "sudo cat /home/docker/cp-test_ha-500000-m04_ha-500000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 cp ha-500000-m04:/home/docker/cp-test.txt ha-500000-m03:/home/docker/cp-test_ha-500000-m04_ha-500000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m04 "sudo cat /home/docker/cp-test.txt"
E0927 10:19:17.697397    2039 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19712-1508/.minikube/profiles/functional-513000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 ssh -n ha-500000-m03 "sudo cat /home/docker/cp-test_ha-500000-m04_ha-500000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.28s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2.503764209s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.50s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.04s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-440000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-440000 --output=json --user=testUser: (1.907892875s)
--- PASS: TestJSONOutput/stop/Command (1.91s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-472000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-472000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.069042ms)

-- stdout --
	{"specversion":"1.0","id":"aa0b13b9-09b8-4d9f-b199-a5a23d8d6b89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-472000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"866c25cd-57da-4bc1-97b1-f7bb63edd3ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19712"}}
	{"specversion":"1.0","id":"f9d3af41-00ec-4386-bbf6-d7d07b4168f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig"}}
	{"specversion":"1.0","id":"b412b917-234a-424c-9d03-0b3439bda8e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2c5c4838-e630-43eb-9422-3e59260964bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"22e1208b-4f49-4c22-af58-0f4550382e99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube"}}
	{"specversion":"1.0","id":"cad34e43-15e5-41ed-8a0c-46005ca2744a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b38eba94-c383-40a0-8575-0de7616337bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-472000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-472000
--- PASS: TestErrorJSONOutput (0.20s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.27s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-882000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-882000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.763875ms)

-- stdout --
	* [NoKubernetes-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19712-1508/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19712-1508/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-882000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-882000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.072542ms)

-- stdout --
	* The control-plane node NoKubernetes-882000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-882000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.689494625s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.717253167s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.41s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-882000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-882000: (3.126634167s)
--- PASS: TestNoKubernetes/serial/Stop (3.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-882000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-882000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.558875ms)

-- stdout --
	* The control-plane node NoKubernetes-882000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-882000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-862000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)
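Note: the same command is what collects diagnostics after a failed upgrade. A sketch; the --file flag (write the log to a file instead of stdout) is assumed from standard minikube usage and is not exercised by this test:

	$ minikube logs -p stopped-upgrade-862000 --file=./stopped-upgrade.log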

TestStartStop/group/old-k8s-version/serial/Stop (2.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-011000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-011000 --alsologtostderr -v=3: (2.893423625s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.89s)
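Note: the stop commands in this group use klog-style flags: -v=3 raises log verbosity and --alsologtostderr mirrors log output to the terminal. A sketch with a hypothetical profile "demo":

	$ minikube stop -p demo --alsologtostderr -v=3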

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (51.328375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-011000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
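Note: this subtest (repeated below for the other groups) encodes a useful pattern: "minikube status" exits 7 for a stopped host (hence "may be ok" above), and "addons enable" can still be issued against the stopped profile; the test only asserts that the enable command succeeds. A sketch with a hypothetical profile "demo":

	$ minikube status --format={{.Host}} -p demo    # prints "Stopped", exits 7
	$ minikube addons enable dashboard -p demo --images=MetricsScraper=registry.k8s.io/echoserver:1.4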

TestStartStop/group/no-preload/serial/Stop (3.92s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-684000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-684000 --alsologtostderr -v=3: (3.920916792s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-684000 -n no-preload-684000: exit status 7 (61.293583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-684000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.56s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-936000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-936000 --alsologtostderr -v=3: (3.562942791s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.56s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-936000 -n embed-certs-936000: exit status 7 (56.085125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-936000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-488000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-488000 --alsologtostderr -v=3: (2.100050583s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-488000 -n default-k8s-diff-port-488000: exit status 7 (61.719792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-488000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-367000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
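Note: the flags above show the addon image-override syntax: --images maps an addon component to a replacement image, and --registries points that component at an alternate registry (a deliberately fake one here, since no pods are expected to schedule in cni mode). A sketch mirroring the logged command with a hypothetical profile "demo":

	$ minikube addons enable metrics-server -p demo \
	    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	    --registries=MetricsServer=fake.domain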

TestStartStop/group/newest-cni/serial/Stop (3.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-367000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-367000 --alsologtostderr -v=3: (3.304861625s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (52.7885ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-367000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/273)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-770000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-770000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-770000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-770000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-770000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-770000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-770000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-770000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-770000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-770000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-770000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: /etc/hosts:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: /etc/resolv.conf:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-770000

>>> host: crictl pods:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: crictl containers:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> k8s: describe netcat deployment:
error: context "cilium-770000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-770000" does not exist

>>> k8s: netcat logs:
error: context "cilium-770000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-770000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-770000" does not exist

>>> k8s: coredns logs:
error: context "cilium-770000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-770000" does not exist

>>> k8s: api server logs:
error: context "cilium-770000" does not exist

>>> host: /etc/cni:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: ip a s:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: ip r s:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: iptables-save:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: iptables table nat:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-770000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-770000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-770000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-770000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-770000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-770000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-770000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-770000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-770000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-770000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-770000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: kubelet daemon config:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> k8s: kubelet logs:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-770000

>>> host: docker daemon status:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: docker daemon config:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: docker system info:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: cri-docker daemon status:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: cri-docker daemon config:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: cri-dockerd version:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: containerd daemon status:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: containerd daemon config:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: containerd config dump:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: crio daemon status:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: crio daemon config:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: /etc/crio:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

>>> host: crio config:
* Profile "cilium-770000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770000"

----------------------- debugLogs end: cilium-770000 [took: 2.184558917s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-770000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-770000
--- SKIP: TestNetworkPlugins/group/cilium (2.29s)
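Note: every debugLogs probe above failed with a variant of "context was not found" because the cilium-770000 profile was skipped before a cluster, and therefore a kubeconfig context, was ever created. An illustrative pre-check before reading through such logs (the grep pipeline is an assumption, not part of the harness):

	$ kubectl config get-contexts -o name | grep -x cilium-770000 || echo "context missing"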

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-408000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-408000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
